Tom Hite 2013-10-08 18:30:58 +00:00
commit 527a73cf4a
109 changed files with 2963 additions and 670 deletions


@ -1,11 +1,113 @@
## 0.3.7 (unreleased)
IMPROVEMENTS:
* builder/openstack: Can now specify a project. [GH-382]
## 0.3.10 (unreleased)
BUG FIXES:
* builder/all: timeout waiting for SSH connection is a failure. [GH-491]
* builder/digitalocean: don't panic if erroneous API response doesn't
contain error message. [GH-492]
* builder/virtualbox: error if VirtualBox version can't be detected. [GH-488]
* builder/virtualbox: detect if vboxdrv isn't properly setup. [GH-488]
## 0.3.9 (October 2, 2013)
FEATURES:
* The Amazon chroot builder is now able to run without any `sudo` privileges
by using the "command_wrapper" configuration. [GH-430]
* Chef provisioner supports environments. [GH-483]
BUG FIXES:
* core: default user variable values don't need to be strings. [GH-456]
* builder/amazon-chroot: Fix errors with waiting for state change. [GH-459]
* builder/digitalocean: Use proper error message JSON key (DO API change).
* communicator/ssh: SCP uploads now work properly when directories
contain symlinks. [GH-449]
* provisioner/chef-solo: Data bags and roles path are now properly
populated when set. [GH-470]
* provisioner/shell: Windows line endings are actually properly changed
to Unix line endings. [GH-477]
## 0.3.8 (September 22, 2013)
FEATURES:
* core: You can now specify `only` and `except` configurations on any
provisioner or post-processor to specify a list of builds that they
are valid for. [GH-438]
* builders/virtualbox: Guest additions can be attached rather than uploaded,
easier to handle for Windows guests. [GH-405]
* provisioner/chef-solo: Ability to specify a custom Chef configuration
template.
* provisioner/chef-solo: Roles and data bags support. [GH-348]
IMPROVEMENTS:
* core: User variables can now be used for integer, boolean, etc.
values. [GH-418]
* core: Plugins made with incompatible versions will no longer load.
* builder/amazon/all: Interrupts work while waiting for AMI to be ready.
* provisioner/shell: Script line-endings are automatically converted to
Unix-style line-endings. Can be disabled by setting "binary" to "true".
[GH-277]
BUG FIXES:
* core: Set TCP KeepAlives on internally created RPC connections so that
they don't die. [GH-416]
* builder/amazon/all: While waiting for AMI, will detect "failed" state.
* builder/amazon/all: Waiting for state will detect if the resource (AMI,
instance, etc.) disappears from under it.
* builder/amazon/instance: Exclude only contents of /tmp, not /tmp
itself. [GH-437]
* builder/amazon/instance: Make AccessKey/SecretKey available to bundle
command even when they come from the environment. [GH-434]
* builder/virtualbox: F1-F12 and delete scancodes now work. [GH-425]
* post-processor/vagrant: Override configurations properly work. [GH-426]
* provisioner/puppet-masterless: Fix failure case when both facter vars
are used and prevent_sudo. [GH-415]
* provisioner/puppet-masterless: User variables now work properly in
manifest file and hiera path. [GH-448]
## 0.3.7 (September 9, 2013)
BACKWARDS INCOMPATIBILITIES:
* The "event_delay" option for the DigitalOcean builder is now gone.
The builder automatically waits for events to go away. Run your templates
through `packer fix` to get rid of these.
FEATURES:
* **NEW PROVISIONER:** `puppet-masterless`. You can now provision with
a masterless Puppet setup. [GH-234]
* New globally available template function: `uuid`. Generates a new random
UUID.
* New globally available template function: `isotime`. Generates the
current time in ISO standard format.
* New Amazon template function: `clean_ami_name`. Substitutes '-' for
characters that are illegal to use in an AMI name.
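The `clean_ami_name` entry above refers to Packer's template function for sanitizing AMI names. A minimal sketch of the idea in Go — the allowed character set here is an assumption based on AWS's AMI naming rules, not Packer's actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// cleanAMIName substitutes '-' for characters that are not valid in an
// AMI name. The allowed set below is an assumption; Packer's real
// function may use a different set.
func cleanAMIName(s string) string {
	invalid := regexp.MustCompile(`[^a-zA-Z0-9().\-/_ ]`)
	return invalid.ReplaceAllString(s, "-")
}

func main() {
	// Colons and '@' are not allowed, so they get replaced.
	fmt.Println(cleanAMIName("my ami @ 2013-10-08T18:30:58Z"))
}
```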
IMPROVEMENTS:
* builder/amazon/all: Ability to specify the format of the temporary
keypair created. [GH-389]
* builder/amazon/all: Support the NoDevice flag for block mappings. [GH-396]
* builder/digitalocean: Retry on any pending event errors.
* builder/openstack: Can now specify a project. [GH-382]
* builder/virtualbox: Can now attach hard drive over SATA. [GH-391]
* provisioner/file: Can now upload directories. [GH-251]
BUG FIXES:
* core: Detect if SCP is not enabled on the other side. [GH-386]
* builder/amazon/all: When copying AMI to multiple regions, copy
the metadata (tags and attributes) as well. [GH-388]
* builder/amazon/all: Fix panic case where eventually consistent
instance state caused an index out of bounds.
* builder/virtualbox: The `vm_name` setting now properly sets the OVF
name of the output. [GH-401]
* builder/vmware: Autoanswer VMware dialogs. [GH-393]
* command/inspect: Fix weird output for default values for optional vars.
## 0.3.6 (September 2, 2013)


@ -47,7 +47,9 @@ it raises the chances we can quickly merge or address your contributions.
If you have never worked with Go before, you will have to complete the
following steps in order to be able to compile and test Packer.
1. Install Go. On a Mac, you can `brew install go`.
1. Install Go. On a Mac, you can `brew install go`. Make sure the Go
version is at least Go 1.1. Packer will not work with anything less than
Go 1.1.
2. Set and export the `GOPATH` environment variable. For example, you can
add `export GOPATH=$HOME/Documents/golang` to your `.bash_profile`.


@ -2,16 +2,22 @@ NO_COLOR=\033[0m
OK_COLOR=\033[32;01m
ERROR_COLOR=\033[31;01m
WARN_COLOR=\033[33;01m
DEPS = $(shell go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)
all: deps
@mkdir -p bin/
@echo "$(OK_COLOR)==> Building$(NO_COLOR)"
@./scripts/build.sh
@bash --norc -i ./scripts/build.sh
deps:
@echo "$(OK_COLOR)==> Installing dependencies$(NO_COLOR)"
@go get -d -v ./...
@go list -f '{{range .TestImports}}{{.}} {{end}}' ./... | xargs -n1 go get -d
@echo $(DEPS) | xargs -n1 go get -d
updatedeps:
@echo "$(OK_COLOR)==> Updating all dependencies$(NO_COLOR)"
@go get -d -v -u ./...
@echo $(DEPS) | xargs -n1 go get -d -u
clean:
@rm -rf bin/ local/ pkg/ src/ website/.sass-cache website/build
@ -23,4 +29,4 @@ test: deps
@echo "$(OK_COLOR)==> Testing Packer...$(NO_COLOR)"
go test ./...
.PHONY: all deps format test
.PHONY: all clean deps format test updatedeps


@ -27,16 +27,19 @@ type Config struct {
awscommon.AMIConfig `mapstructure:",squash"`
ChrootMounts [][]string `mapstructure:"chroot_mounts"`
CommandWrapper string `mapstructure:"command_wrapper"`
CopyFiles []string `mapstructure:"copy_files"`
DevicePath string `mapstructure:"device_path"`
MountCommand string `mapstructure:"mount_command"`
MountPath string `mapstructure:"mount_path"`
SourceAmi string `mapstructure:"source_ami"`
UnmountCommand string `mapstructure:"unmount_command"`
tpl *packer.ConfigTemplate
}
type wrappedCommandTemplate struct {
Command string
}
type Builder struct {
config Config
runner multistep.Runner
@ -53,6 +56,7 @@ func (b *Builder) Prepare(raws ...interface{}) error {
return err
}
b.config.tpl.UserVars = b.config.PackerUserVars
b.config.tpl.Funcs(awscommon.TemplateFuncs)
// Defaults
if b.config.ChrootMounts == nil {
@ -77,18 +81,14 @@ func (b *Builder) Prepare(raws ...interface{}) error {
b.config.CopyFiles = []string{"/etc/resolv.conf"}
}
if b.config.MountCommand == "" {
b.config.MountCommand = "mount"
if b.config.CommandWrapper == "" {
b.config.CommandWrapper = "{{.Command}}"
}
if b.config.MountPath == "" {
b.config.MountPath = "packer-amazon-chroot-volumes/{{.Device}}"
}
if b.config.UnmountCommand == "" {
b.config.UnmountCommand = "umount"
}
// Accumulate any errors
errs := common.CheckUnusedConfig(md)
errs = packer.MultiErrorAppend(errs, b.config.AccessConfig.Prepare(b.config.tpl)...)
@ -126,10 +126,8 @@ func (b *Builder) Prepare(raws ...interface{}) error {
}
templates := map[string]*string{
"device_path": &b.config.DevicePath,
"mount_command": &b.config.MountCommand,
"source_ami": &b.config.SourceAmi,
"unmount_command": &b.config.UnmountCommand,
"device_path": &b.config.DevicePath,
"source_ami": &b.config.SourceAmi,
}
for n, ptr := range templates {
@ -166,12 +164,20 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
ec2conn := ec2.New(auth, region)
wrappedCommand := func(command string) (string, error) {
return b.config.tpl.Process(
b.config.CommandWrapper, &wrappedCommandTemplate{
Command: command,
})
}
// Setup the state bag and initial state for the steps
state := new(multistep.BasicStateBag)
state.Put("config", &b.config)
state.Put("ec2", ec2conn)
state.Put("hook", hook)
state.Put("ui", ui)
state.Put("wrappedCommand", CommandWrapper(wrappedCommand))
// Build the steps
steps := []multistep.Step{
@ -189,15 +195,14 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&StepEarlyCleanup{},
&StepSnapshot{},
&StepRegisterAMI{},
&awscommon.StepAMIRegionCopy{
Regions: b.config.AMIRegions,
},
&awscommon.StepModifyAMIAttributes{
Description: b.config.AMIDescription,
Users: b.config.AMIUsers,
Groups: b.config.AMIGroups,
},
&awscommon.StepAMIRegionCopy{
Regions: b.config.AMIRegions,
Tags: b.config.AMITags,
},
&awscommon.StepCreateTags{
Tags: b.config.AMITags,
},


@ -82,3 +82,14 @@ func TestBuilderPrepare_SourceAmi(t *testing.T) {
t.Errorf("err: %s", err)
}
}
func TestBuilderPrepare_CommandWrapper(t *testing.T) {
b := &Builder{}
config := testConfig()
config["command_wrapper"] = "echo hi; {{.Command}}"
err := b.Prepare(config)
if err != nil {
t.Errorf("err: %s", err)
}
}


@ -0,0 +1,15 @@
package chroot
import (
"os/exec"
)
// CommandWrapper is a type that given a command, will possibly modify that
// command in-flight. This might return an error.
type CommandWrapper func(string) (string, error)
// ShellCommand takes a command string and returns an *exec.Cmd to execute
// it within the context of a shell (/bin/sh).
func ShellCommand(command string) *exec.Cmd {
return exec.Command("/bin/sh", "-c", command)
}
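The new `CommandWrapper` type pairs with the `command_wrapper` configuration, which is a template like `"sudo {{.Command}}"` rendered against the command being run. A rough sketch of that wrapping behavior using the standard `text/template` package — Packer actually processes it through its own `ConfigTemplate` type, so this only mirrors the idea:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// wrapCommand renders a command_wrapper-style template against the
// command to execute, producing the final shell command string.
func wrapCommand(wrapper, command string) (string, error) {
	tpl, err := template.New("wrapper").Parse(wrapper)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, struct{ Command string }{command}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// With command_wrapper = "sudo {{.Command}}", every command the
	// chroot builder runs gets prefixed with sudo.
	out, err := wrapCommand("sudo {{.Command}}", "mount /dev/xvdf /mnt")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The resulting string would then be handed to `ShellCommand` above for execution under `/bin/sh -c`.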


@ -1,8 +1,10 @@
package chroot
import (
"fmt"
"github.com/mitchellh/packer/packer"
"io"
"io/ioutil"
"log"
"os"
"os/exec"
@ -13,16 +15,18 @@ import (
// Communicator is a special communicator that works by executing
// commands locally but within a chroot.
type Communicator struct {
Chroot string
Chroot string
CmdWrapper CommandWrapper
}
func (c *Communicator) Start(cmd *packer.RemoteCmd) error {
chrootCmdPath, err := exec.LookPath("chroot")
command, err := c.CmdWrapper(
fmt.Sprintf("chroot %s %s", c.Chroot, cmd.Command))
if err != nil {
return err
}
localCmd := exec.Command(chrootCmdPath, c.Chroot, "/bin/sh", "-c", cmd.Command)
localCmd := ShellCommand(command)
localCmd.Stdin = cmd.Stdin
localCmd.Stdout = cmd.Stdout
localCmd.Stderr = cmd.Stderr
@ -46,7 +50,7 @@ func (c *Communicator) Start(cmd *packer.RemoteCmd) error {
}
log.Printf(
"Chroot executation ended with '%d': '%s'",
"Chroot execution exited with '%d': '%s'",
exitStatus, cmd.Command)
cmd.SetExited(exitStatus)
}()
@ -57,49 +61,31 @@ func (c *Communicator) Start(cmd *packer.RemoteCmd) error {
func (c *Communicator) Upload(dst string, r io.Reader) error {
dst = filepath.Join(c.Chroot, dst)
log.Printf("Uploading to chroot dir: %s", dst)
f, err := os.Create(dst)
tf, err := ioutil.TempFile("", "packer-amazon-chroot")
if err != nil {
return fmt.Errorf("Error preparing shell script: %s", err)
}
defer os.Remove(tf.Name())
io.Copy(tf, r)
cpCmd, err := c.CmdWrapper(fmt.Sprintf("cp %s %s", tf.Name(), dst))
if err != nil {
return err
}
defer f.Close()
if _, err := io.Copy(f, r); err != nil {
return err
}
return nil
return ShellCommand(cpCmd).Run()
}
func (c *Communicator) UploadDir(dst string, src string, exclude []string) error {
walkFn := func(fullPath string, info os.FileInfo, err error) error {
if err != nil {
return err
}
path, err := filepath.Rel(src, fullPath)
if err != nil {
return err
}
for _, e := range exclude {
if e == path {
log.Printf("Skipping excluded file: %s", path)
return nil
}
}
dstPath := filepath.Join(dst, path)
f, err := os.Open(fullPath)
if err != nil {
return err
}
defer f.Close()
return c.Upload(dstPath, f)
// TODO: remove any file copied if it appears in `exclude`
chrootDest := filepath.Join(c.Chroot, dst)
log.Printf("Uploading directory '%s' to '%s'", src, chrootDest)
cpCmd, err := c.CmdWrapper(fmt.Sprintf("cp -R %s* %s", src, chrootDest))
if err != nil {
return err
}
log.Printf("Uploading directory '%s' to '%s'", src, dst)
return filepath.Walk(src, walkFn)
return ShellCommand(cpCmd).Run()
}
func (c *Communicator) Download(src string, w io.Writer) error {


@ -0,0 +1 @@
package chroot


@ -0,0 +1,44 @@
package chroot
import (
"fmt"
"io/ioutil"
"os"
"testing"
)
func TestCopyFile(t *testing.T) {
first, err := ioutil.TempFile("", "copy_files_test")
if err != nil {
t.Fatalf("couldn't create temp file.")
}
defer os.Remove(first.Name())
newName := first.Name() + "-new"
payload := "copy_files_test.go payload"
if _, err = first.WriteString(payload); err != nil {
t.Fatalf("Couldn't write payload to first file.")
}
first.Sync()
cmd := ShellCommand(fmt.Sprintf("cp %s %s", first.Name(), newName))
if err := cmd.Run(); err != nil {
t.Fatalf("Couldn't copy file")
}
defer os.Remove(newName)
second, err := os.Open(newName)
if err != nil {
t.Fatalf("Couldn't open copied file.")
}
defer second.Close()
var copiedPayload = make([]byte, len(payload))
if _, err := second.Read(copiedPayload); err != nil {
t.Fatalf("Couldn't open copied file for reading.")
}
if string(copiedPayload) != payload {
t.Fatalf("payload not copied.")
}
}


@ -60,7 +60,8 @@ func (s *StepAttachVolume) Run(state multistep.StateBag) multistep.StepAction {
return nil, "", errors.New("No attachments on volume.")
}
return nil, resp.Volumes[0].Attachments[0].Status, nil
a := resp.Volumes[0].Attachments[0]
return a, a.Status, nil
},
}
@ -111,12 +112,12 @@ func (s *StepAttachVolume) CleanupFunc(state multistep.StateBag) error {
return nil, "", err
}
state := "detached"
if len(resp.Volumes[0].Attachments) > 0 {
state = resp.Volumes[0].Attachments[0].Status
v := resp.Volumes[0]
if len(v.Attachments) > 0 {
return v, v.Attachments[0].Status, nil
} else {
return v, "detached", nil
}
return nil, state, nil
},
}


@ -15,10 +15,12 @@ func (s *StepChrootProvision) Run(state multistep.StateBag) multistep.StepAction
hook := state.Get("hook").(packer.Hook)
mountPath := state.Get("mount_path").(string)
ui := state.Get("ui").(packer.Ui)
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
// Create our communicator
comm := &Communicator{
Chroot: mountPath,
Chroot: mountPath,
CmdWrapper: wrappedCommand,
}
// Provision


@ -1,12 +1,11 @@
package chroot
import (
"bytes"
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"io"
"log"
"os"
"path/filepath"
)
@ -23,6 +22,8 @@ func (s *StepCopyFiles) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
mountPath := state.Get("mount_path").(string)
ui := state.Get("ui").(packer.Ui)
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
stderr := new(bytes.Buffer)
s.files = make([]string, 0, len(config.CopyFiles))
if len(config.CopyFiles) > 0 {
@ -32,8 +33,20 @@ func (s *StepCopyFiles) Run(state multistep.StateBag) multistep.StepAction {
chrootPath := filepath.Join(mountPath, path)
log.Printf("Copying '%s' to '%s'", path, chrootPath)
if err := s.copySingle(chrootPath, path); err != nil {
err := fmt.Errorf("Error copying file: %s", err)
cmdText, err := wrappedCommand(fmt.Sprintf("cp %s %s", path, chrootPath))
if err != nil {
err := fmt.Errorf("Error building copy command: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
stderr.Reset()
cmd := ShellCommand(cmdText)
cmd.Stderr = stderr
if err := cmd.Run(); err != nil {
err := fmt.Errorf(
"Error copying file: %s\nStderr: %s", err, stderr.String())
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
@ -54,11 +67,18 @@ func (s *StepCopyFiles) Cleanup(state multistep.StateBag) {
}
}
func (s *StepCopyFiles) CleanupFunc(multistep.StateBag) error {
func (s *StepCopyFiles) CleanupFunc(state multistep.StateBag) error {
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
if s.files != nil {
for _, file := range s.files {
log.Printf("Removing: %s", file)
if err := os.Remove(file); err != nil {
localCmdText, err := wrappedCommand(fmt.Sprintf("rm -f %s", file))
if err != nil {
return err
}
localCmd := ShellCommand(localCmdText)
if err := localCmd.Run(); err != nil {
return err
}
}
@ -67,41 +87,3 @@ func (s *StepCopyFiles) CleanupFunc(multistep.StateBag) error {
s.files = nil
return nil
}
func (s *StepCopyFiles) copySingle(dst, src string) error {
// Stat the src file so we can copy the mode later
srcInfo, err := os.Stat(src)
if err != nil {
return err
}
// Remove any existing destination file
if err := os.Remove(dst); err != nil {
return err
}
// Copy the files
srcF, err := os.Open(src)
if err != nil {
return err
}
defer srcF.Close()
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
if _, err := io.Copy(dstF, srcF); err != nil {
return err
}
dstF.Close()
// Match the mode
if err := os.Chmod(dst, srcInfo.Mode()); err != nil {
return err
}
return nil
}


@ -75,7 +75,8 @@ func (s *StepCreateVolume) Run(state multistep.StateBag) multistep.StepAction {
return nil, "", err
}
return nil, resp.Volumes[0].Status, nil
v := resp.Volumes[0]
return v, v.Status, nil
},
}


@ -7,7 +7,6 @@ import (
"github.com/mitchellh/packer/packer"
"log"
"os"
"os/exec"
"path/filepath"
)
@ -28,6 +27,7 @@ func (s *StepMountDevice) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
ui := state.Get("ui").(packer.Ui)
device := state.Get("device").(string)
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
mountPath, err := config.tpl.Process(config.MountPath, &mountPathData{
Device: filepath.Base(device),
@ -59,8 +59,16 @@ func (s *StepMountDevice) Run(state multistep.StateBag) multistep.StepAction {
ui.Say("Mounting the root device...")
stderr := new(bytes.Buffer)
mountCommand := fmt.Sprintf("%s %s %s", config.MountCommand, device, mountPath)
cmd := exec.Command("/bin/sh", "-c", mountCommand)
mountCommand, err := wrappedCommand(
fmt.Sprintf("mount %s %s", device, mountPath))
if err != nil {
err := fmt.Errorf("Error creating mount command: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
cmd := ShellCommand(mountCommand)
cmd.Stderr = stderr
if err := cmd.Run(); err != nil {
err := fmt.Errorf(
@ -90,12 +98,16 @@ func (s *StepMountDevice) CleanupFunc(state multistep.StateBag) error {
return nil
}
config := state.Get("config").(*Config)
ui := state.Get("ui").(packer.Ui)
ui.Say("Unmounting the root device...")
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
unmountCommand := fmt.Sprintf("%s %s", config.UnmountCommand, s.mountPath)
cmd := exec.Command("/bin/sh", "-c", unmountCommand)
ui.Say("Unmounting the root device...")
unmountCommand, err := wrappedCommand(fmt.Sprintf("umount %s", s.mountPath))
if err != nil {
return fmt.Errorf("Error creating unmount command: %s", err)
}
cmd := ShellCommand(unmountCommand)
if err := cmd.Run(); err != nil {
return fmt.Errorf("Error unmounting root device: %s", err)
}


@ -6,7 +6,6 @@ import (
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"os"
"os/exec"
)
// StepMountExtra mounts the attached device.
@ -21,6 +20,7 @@ func (s *StepMountExtra) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
mountPath := state.Get("mount_path").(string)
ui := state.Get("ui").(packer.Ui)
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
s.mounts = make([]string, 0, len(config.ChrootMounts))
@ -42,13 +42,19 @@ func (s *StepMountExtra) Run(state multistep.StateBag) multistep.StepAction {
ui.Message(fmt.Sprintf("Mounting: %s", mountInfo[2]))
stderr := new(bytes.Buffer)
mountCommand := fmt.Sprintf(
"%s %s %s %s",
config.MountCommand,
mountCommand, err := wrappedCommand(fmt.Sprintf(
"mount %s %s %s",
flags,
mountInfo[1],
innerPath)
cmd := exec.Command("/bin/sh", "-c", mountCommand)
innerPath))
if err != nil {
err := fmt.Errorf("Error creating mount command: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
cmd := ShellCommand(mountCommand)
cmd.Stderr = stderr
if err := cmd.Run(); err != nil {
err := fmt.Errorf(
@ -79,15 +85,18 @@ func (s *StepMountExtra) CleanupFunc(state multistep.StateBag) error {
return nil
}
config := state.Get("config").(*Config)
wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
for len(s.mounts) > 0 {
var path string
lastIndex := len(s.mounts) - 1
path, s.mounts = s.mounts[lastIndex], s.mounts[:lastIndex]
unmountCommand := fmt.Sprintf("%s %s", config.UnmountCommand, path)
unmountCommand, err := wrappedCommand(fmt.Sprintf("umount %s", path))
if err != nil {
return fmt.Errorf("Error creating unmount command: %s", err)
}
stderr := new(bytes.Buffer)
cmd := exec.Command("/bin/sh", "-c", unmountCommand)
cmd := ShellCommand(unmountCommand)
cmd.Stderr = stderr
if err := cmd.Run(); err != nil {
return fmt.Errorf(


@ -52,8 +52,16 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
state.Put("amis", amis)
// Wait for the image to become ready
stateChange := awscommon.StateChangeConf{
Conn: ec2conn,
Pending: []string{"pending"},
Target: "available",
Refresh: awscommon.AMIStateRefreshFunc(ec2conn, registerResp.ImageId),
StepState: state,
}
ui.Say("Waiting for AMI to become ready...")
if err := awscommon.WaitForAMI(ec2conn, registerResp.ImageId); err != nil {
if _, err := awscommon.WaitForState(&stateChange); err != nil {
err := fmt.Errorf("Error waiting for AMI: %s", err)
state.Put("error", err)
ui.Error(err.Error())


@ -51,7 +51,8 @@ func (s *StepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
return nil, "", errors.New("No snapshots found.")
}
return nil, resp.Snapshots[0].Status, nil
s := resp.Snapshots[0]
return s, s.Status, nil
},
}


@ -18,7 +18,14 @@ type AccessConfig struct {
// Auth returns a valid aws.Auth object for access to AWS services, or
// an error if the authentication couldn't be resolved.
func (c *AccessConfig) Auth() (aws.Auth, error) {
return aws.GetAuth(c.AccessKey, c.SecretKey)
auth, err := aws.GetAuth(c.AccessKey, c.SecretKey)
if err == nil {
// Store the accesskey and secret that we got...
c.AccessKey = auth.AccessKey
c.SecretKey = auth.SecretKey
}
return auth, err
}
// Region returns the aws.Region object for access to AWS services, requesting


@ -1,30 +0,0 @@
package common
import (
"github.com/mitchellh/goamz/ec2"
"log"
"time"
)
// WaitForAMI waits for the given AMI ID to become ready.
func WaitForAMI(c *ec2.EC2, imageId string) error {
for {
imageResp, err := c.Images([]string{imageId}, ec2.NewFilter())
if err != nil {
if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidAMIID.NotFound" {
log.Println("AMI not found, probably state issues on AWS side. Trying again.")
continue
}
return err
}
if imageResp.Images[0].State == "available" {
return nil
}
log.Printf("Image in state %s, sleeping 2s before checking again",
imageResp.Images[0].State)
time.Sleep(2 * time.Second)
}
}


@ -2,6 +2,7 @@ package common
import (
"fmt"
"github.com/mitchellh/goamz/aws"
"github.com/mitchellh/goamz/ec2"
"github.com/mitchellh/packer/packer"
"log"
@ -51,9 +52,10 @@ func (a *Artifact) String() string {
func (a *Artifact) Destroy() error {
errors := make([]error, 0)
for _, imageId := range a.Amis {
log.Printf("Deregistering image ID: %s", imageId)
if _, err := a.Conn.DeregisterImage(imageId); err != nil {
for region, imageId := range a.Amis {
log.Printf("Deregistering image ID (%s) from region (%s)", imageId, region)
regionconn := ec2.New(a.Conn.Auth, aws.Regions[region])
if _, err := regionconn.DeregisterImage(imageId); err != nil {
errors = append(errors, err)
}


@ -13,11 +13,12 @@ type BlockDevice struct {
VolumeSize int64 `mapstructure:"volume_size"`
DeleteOnTermination bool `mapstructure:"delete_on_termination"`
IOPS int64 `mapstructure:"iops"`
NoDevice bool `mapstructure:"no_device"`
}
type BlockDevices struct {
AMIMappings []BlockDevice `mapstructure:"ami_block_device_mappings,squash"`
LaunchMappings []BlockDevice `mapstructure:"launch_block_device_mappings,squash"`
AMIMappings []BlockDevice `mapstructure:"ami_block_device_mappings"`
LaunchMappings []BlockDevice `mapstructure:"launch_block_device_mappings"`
}
func buildBlockDevices(b []BlockDevice) []ec2.BlockDeviceMapping {
@ -32,6 +33,7 @@ func buildBlockDevices(b []BlockDevice) []ec2.BlockDeviceMapping {
VolumeSize: blockDevice.VolumeSize,
DeleteOnTermination: blockDevice.DeleteOnTermination,
IOPS: blockDevice.IOPS,
NoDevice: blockDevice.NoDevice,
})
}
return blockDevices


@ -11,17 +11,18 @@ import (
// RunConfig contains configuration for running an instance from a source
// AMI and details on how to access that launched image.
type RunConfig struct {
SourceAmi string `mapstructure:"source_ami"`
IamInstanceProfile string `mapstructure:"iam_instance_profile"`
InstanceType string `mapstructure:"instance_type"`
UserData string `mapstructure:"user_data"`
UserDataFile string `mapstructure:"user_data_file"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
SSHUsername string `mapstructure:"ssh_username"`
SSHPort int `mapstructure:"ssh_port"`
SecurityGroupId string `mapstructure:"security_group_id"`
SubnetId string `mapstructure:"subnet_id"`
VpcId string `mapstructure:"vpc_id"`
SourceAmi string `mapstructure:"source_ami"`
IamInstanceProfile string `mapstructure:"iam_instance_profile"`
InstanceType string `mapstructure:"instance_type"`
UserData string `mapstructure:"user_data"`
UserDataFile string `mapstructure:"user_data_file"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
SSHUsername string `mapstructure:"ssh_username"`
SSHPort int `mapstructure:"ssh_port"`
SecurityGroupId string `mapstructure:"security_group_id"`
SubnetId string `mapstructure:"subnet_id"`
TemporaryKeyPairName string `mapstructure:"temporary_key_pair_name"`
VpcId string `mapstructure:"vpc_id"`
// Unexported fields that are calculated from others
sshTimeout time.Duration
@ -45,6 +46,10 @@ func (c *RunConfig) Prepare(t *packer.ConfigTemplate) []error {
c.RawSSHTimeout = "1m"
}
if c.TemporaryKeyPairName == "" {
c.TemporaryKeyPairName = "packer {{uuid}}"
}
// Validation
var err error
errs := make([]error, 0)
@ -69,14 +74,15 @@ func (c *RunConfig) Prepare(t *packer.ConfigTemplate) []error {
}
templates := map[string]*string{
"iam_instance_profile": &c.IamInstanceProfile,
"instance_type": &c.InstanceType,
"ssh_timeout": &c.RawSSHTimeout,
"security_group_id": &c.SecurityGroupId,
"ssh_username": &c.SSHUsername,
"source_ami": &c.SourceAmi,
"subnet_id": &c.SubnetId,
"vpc_id": &c.VpcId,
"iam_instance_profile": &c.IamInstanceProfile,
"instance_type": &c.InstanceType,
"ssh_timeout": &c.RawSSHTimeout,
"security_group_id": &c.SecurityGroupId,
"ssh_username": &c.SSHUsername,
"source_ami": &c.SourceAmi,
"subnet_id": &c.SubnetId,
"temporary_key_pair_name": &c.TemporaryKeyPairName,
"vpc_id": &c.VpcId,
}
for n, ptr := range templates {


@ -126,3 +126,15 @@ func TestRunConfigPrepare_UserDataFile(t *testing.T) {
t.Fatalf("err: %s", err)
}
}
func TestRunConfigPrepare_TemporaryKeyPairName(t *testing.T) {
c := testConfig()
c.TemporaryKeyPairName = ""
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.TemporaryKeyPairName == "" {
t.Fatal("keypair empty")
}
}
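The new `temporary_key_pair_name` option defaults to `"packer {{uuid}}"`, so each build gets a unique keypair name. A sketch of what rendering that default produces — the V4-style UUID generator here is an assumption for illustration, as Packer uses its own `uuid` template function:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// defaultKeyPairName mimics the "packer {{uuid}}" default by
// substituting a freshly generated random UUID for the template call.
func defaultKeyPairName() string {
	b := make([]byte, 16)
	rand.Read(b)                // crypto/rand; error ignored in this sketch
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("packer %x-%x-%x-%x-%x",
		b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	// e.g. "packer 3f2c1a9e-...": unique per build, so concurrent
	// builds don't collide on the EC2 keypair name.
	fmt.Println(defaultKeyPairName())
}
```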


@ -30,6 +30,32 @@ type StateChangeConf struct {
Target string
}
// AMIStateRefreshFunc returns a StateRefreshFunc that is used to watch
// an AMI for state changes.
func AMIStateRefreshFunc(conn *ec2.EC2, imageId string) StateRefreshFunc {
return func() (interface{}, string, error) {
resp, err := conn.Images([]string{imageId}, ec2.NewFilter())
if err != nil {
if ec2err, ok := err.(*ec2.Error); ok && ec2err.Code == "InvalidAMIID.NotFound" {
// Set this to nil as if we didn't find anything.
resp = nil
} else {
log.Printf("Error on AMIStateRefresh: %s", err)
return nil, "", err
}
}
if resp == nil || len(resp.Images) == 0 {
// Sometimes AWS has consistency issues and doesn't see the
// AMI. Return an empty state.
return nil, "", nil
}
i := resp.Images[0]
return i, i.State, nil
}
}
// InstanceStateRefreshFunc returns a StateRefreshFunc that is used to watch
// an EC2 instance.
func InstanceStateRefreshFunc(conn *ec2.EC2, i *ec2.Instance) StateRefreshFunc {
@ -40,6 +66,12 @@ func InstanceStateRefreshFunc(conn *ec2.EC2, i *ec2.Instance) StateRefreshFunc {
return nil, "", err
}
if len(resp.Reservations) == 0 || len(resp.Reservations[0].Instances) == 0 {
// Sometimes AWS just has consistency issues and doesn't see
// our instance yet. Return an empty state.
return nil, "", nil
}
i = &resp.Reservations[0].Instances[0]
return i, i.State.Name, nil
}
@ -50,6 +82,8 @@ func InstanceStateRefreshFunc(conn *ec2.EC2, i *ec2.Instance) StateRefreshFunc {
func WaitForState(conf *StateChangeConf) (i interface{}, err error) {
log.Printf("Waiting for state to become: %s", conf.Target)
notfoundTick := 0
for {
var currentState string
i, currentState, err = conf.Refresh()
@ -57,27 +91,39 @@ func WaitForState(conf *StateChangeConf) (i interface{}, err error) {
return
}
if currentState == conf.Target {
return
}
if conf.StepState != nil {
if _, ok := conf.StepState.GetOk(multistep.StateCancelled); ok {
return nil, errors.New("interrupted")
if i == nil {
// If we didn't find the resource, check if we have been
// not finding it for awhile, and if so, report an error.
notfoundTick += 1
if notfoundTick > 20 {
return nil, errors.New("couldn't find resource")
}
}
} else {
// Reset the counter for when a resource isn't found
notfoundTick = 0
found := false
for _, allowed := range conf.Pending {
if currentState == allowed {
found = true
break
if currentState == conf.Target {
return
}
}
if !found {
fmt.Errorf("unexpected state '%s', wanted target '%s'", currentState, conf.Target)
return
if conf.StepState != nil {
if _, ok := conf.StepState.GetOk(multistep.StateCancelled); ok {
return nil, errors.New("interrupted")
}
}
found := false
for _, allowed := range conf.Pending {
if currentState == allowed {
found = true
break
}
}
if !found {
err = fmt.Errorf("unexpected state '%s', wanted target '%s'", currentState, conf.Target)
return
}
}
time.Sleep(2 * time.Second)
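The refactored `WaitForState` loop above tolerates AWS's eventually consistent reads by counting consecutive not-found refreshes and only erroring after many in a row. A condensed, self-contained sketch of that pattern — names and the sleep interval are simplified from the diff:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// refreshFunc reports the resource's current state; an empty state
// means the API temporarily doesn't see the resource.
type refreshFunc func() (state string, err error)

// waitForState polls refresh until target is reached. Consecutive
// empty states increment a counter, and too many in a row is treated
// as the resource having disappeared, mirroring the notfoundTick
// logic in the diff above.
func waitForState(refresh refreshFunc, target string, maxNotFound int) error {
	notFound := 0
	for {
		state, err := refresh()
		if err != nil {
			return err
		}
		if state == "" {
			notFound++
			if notFound > maxNotFound {
				return errors.New("couldn't find resource")
			}
		} else {
			notFound = 0 // reset once the resource is visible again
			if state == target {
				return nil
			}
		}
		time.Sleep(10 * time.Millisecond) // the real loop sleeps 2s
	}
}

func main() {
	calls := 0
	err := waitForState(func() (string, error) {
		calls++
		if calls < 3 {
			return "", nil // simulate eventual consistency
		}
		return "available", nil
	}, "available", 20)
	fmt.Println(err)
}
```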


@ -10,7 +10,6 @@ import (
type StepAMIRegionCopy struct {
Regions []string
Tags map[string]string
}
func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
@ -41,33 +40,23 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
ui.Say(fmt.Sprintf("Waiting for AMI (%s) in region (%s) to become ready...", resp.ImageId, region))
if err := WaitForAMI(regionconn, resp.ImageId); err != nil {
stateChange := StateChangeConf{
Conn: regionconn,
Pending: []string{"pending"},
Target: "available",
Refresh: AMIStateRefreshFunc(regionconn, resp.ImageId),
StepState: state,
}
ui.Say(fmt.Sprintf("Waiting for AMI (%s) in region (%s) to become ready...",
resp.ImageId, region))
if _, err := WaitForState(&stateChange); err != nil {
err := fmt.Errorf("Error waiting for AMI (%s) in region (%s): %s", resp.ImageId, region, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
// Need to re-apply Tags since they are not copied with the AMI
if len(s.Tags) > 0 {
ui.Say(fmt.Sprintf("Adding tags to AMI (%s)...", resp.ImageId))
var ec2Tags []ec2.Tag
for key, value := range s.Tags {
ui.Message(fmt.Sprintf("Adding tag: \"%s\": \"%s\"", key, value))
ec2Tags = append(ec2Tags, ec2.Tag{key, value})
}
_, err := regionconn.CreateTags([]string{resp.ImageId}, ec2Tags)
if err != nil {
err := fmt.Errorf("Error adding tags to AMI (%s): %s", resp.ImageId, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
amis[region] = resp.ImageId
}

View File

@ -2,6 +2,7 @@ package common
import (
"fmt"
"github.com/mitchellh/goamz/aws"
"github.com/mitchellh/goamz/ec2"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
@ -15,23 +16,25 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction {
ec2conn := state.Get("ec2").(*ec2.EC2)
ui := state.Get("ui").(packer.Ui)
amis := state.Get("amis").(map[string]string)
if len(s.Tags) > 0 {
for region, ami := range amis {
ui.Say(fmt.Sprintf("Adding tags to AMI (%s)...", ami))
var ec2Tags []ec2.Tag
for key, value := range s.Tags {
ui.Message(fmt.Sprintf("Adding tag: \"%s\": \"%s\"", key, value))
ec2Tags = append(ec2Tags, ec2.Tag{key, value})
}
regionconn := ec2.New(ec2conn.Auth, aws.Regions[region])
_, err := regionconn.CreateTags([]string{ami}, ec2Tags)
if err != nil {
err := fmt.Errorf("Error adding tags to AMI (%s): %s", ami, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
}

View File

@ -1,13 +1,10 @@
package common
import (
"cgl.tideland.biz/identifier"
"encoding/hex"
"fmt"
"github.com/mitchellh/goamz/ec2"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
"os"
"runtime"
)
@ -15,6 +12,7 @@ import (
type StepKeyPair struct {
Debug bool
DebugKeyPath string
KeyPairName string
keyName string
}
@ -23,20 +21,18 @@ func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
ec2conn := state.Get("ec2").(*ec2.EC2)
ui := state.Get("ui").(packer.Ui)
ui.Say(fmt.Sprintf("Creating temporary keypair: %s", s.KeyPairName))
keyResp, err := ec2conn.CreateKeyPair(s.KeyPairName)
if err != nil {
state.Put("error", fmt.Errorf("Error creating temporary keypair: %s", err))
return multistep.ActionHalt
}
// Set the keyname so we know to delete it later
s.keyName = s.KeyPairName
// Set some state data for use in future steps
state.Put("keyPair", s.keyName)
state.Put("privateKey", keyResp.KeyMaterial)
// If we're in debug mode, output the private key to the working

View File

@ -2,6 +2,7 @@ package common
import (
"fmt"
"github.com/mitchellh/goamz/aws"
"github.com/mitchellh/goamz/ec2"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
@ -18,7 +19,6 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc
ec2conn := state.Get("ec2").(*ec2.EC2)
ui := state.Get("ui").(packer.Ui)
amis := state.Get("amis").(map[string]string)
// Determine if there is any work to do.
valid := false
@ -59,15 +59,18 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc
}
}
ui.Say("Modifying AMI attributes...")
for region, ami := range amis {
ui.Say(fmt.Sprintf("Modifying attributes on AMI (%s)...", ami))
regionconn := ec2.New(ec2conn.Auth, aws.Regions[region])
for name, opts := range options {
ui.Message(fmt.Sprintf("Modifying: %s", name))
_, err := regionconn.ModifyImageAttribute(ami, opts)
if err != nil {
err := fmt.Errorf("Error modifying AMI attributes: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
}

View File

@ -0,0 +1,38 @@
package common
import (
"bytes"
"text/template"
)
func isalphanumeric(b byte) bool {
if '0' <= b && b <= '9' {
return true
}
if 'a' <= b && b <= 'z' {
return true
}
if 'A' <= b && b <= 'Z' {
return true
}
return false
}
// Clean up AMI name by replacing invalid characters with "-"
func templateCleanAMIName(s string) string {
allowed := []byte{'(', ')', ',', '/', '-', '_'}
b := []byte(s)
newb := make([]byte, len(b))
for i, c := range b {
if isalphanumeric(c) || bytes.IndexByte(allowed, c) != -1 {
newb[i] = c
} else {
newb[i] = '-'
}
}
return string(newb[:])
}
var TemplateFuncs = template.FuncMap{
"clean_ami_name": templateCleanAMIName,
}

View File

@ -0,0 +1,16 @@
package common
import (
"testing"
)
func TestAMITemplatePrepare_clean(t *testing.T) {
origName := "AMZamz09(),/-_:&^$%"
expected := "AMZamz09(),/-_-----"
name := templateCleanAMIName(origName)
if name != expected {
t.Fatalf("template names do not match: expected %s got %s\n", expected, name)
}
}

View File

@ -44,6 +44,7 @@ func (b *Builder) Prepare(raws ...interface{}) error {
return err
}
b.config.tpl.UserVars = b.config.PackerUserVars
b.config.tpl.Funcs(awscommon.TemplateFuncs)
// Accumulate any errors
errs := common.CheckUnusedConfig(md)
@ -84,6 +85,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&awscommon.StepKeyPair{
Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("ec2_%s.pem", b.config.PackerBuildName),
KeyPairName: b.config.TemporaryKeyPairName,
},
&awscommon.StepSecurityGroup{
SecurityGroupId: b.config.SecurityGroupId,
@ -109,15 +111,14 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&common.StepProvision{},
&stepStopInstance{},
&stepCreateAMI{},
&awscommon.StepModifyAMIAttributes{
Description: b.config.AMIDescription,
Users: b.config.AMIUsers,
Groups: b.config.AMIGroups,
},
&awscommon.StepAMIRegionCopy{
Regions: b.config.AMIRegions,
Tags: b.config.AMITags,
},
&awscommon.StepCreateTags{
Tags: b.config.AMITags,
},

View File

@ -39,8 +39,16 @@ func (s *stepCreateAMI) Run(state multistep.StateBag) multistep.StepAction {
state.Put("amis", amis)
// Wait for the image to become ready
stateChange := awscommon.StateChangeConf{
Conn: ec2conn,
Pending: []string{"pending"},
Target: "available",
Refresh: awscommon.AMIStateRefreshFunc(ec2conn, createResp.ImageId),
StepState: state,
}
ui.Say("Waiting for AMI to become ready...")
if _, err := awscommon.WaitForState(&stateChange); err != nil {
err := fmt.Errorf("Error waiting for AMI: %s", err)
state.Put("error", err)
ui.Error(err.Error())

View File

@ -56,6 +56,7 @@ func (b *Builder) Prepare(raws ...interface{}) error {
return err
}
b.config.tpl.UserVars = b.config.PackerUserVars
b.config.tpl.Funcs(awscommon.TemplateFuncs)
if b.config.BundleDestination == "" {
b.config.BundleDestination = "/tmp"
@ -82,7 +83,7 @@ func (b *Builder) Prepare(raws ...interface{}) error {
"-u {{.AccountId}} " +
"-c {{.CertPath}} " +
"-r {{.Architecture}} " +
"-e {{.PrivatePath}}/* " +
"-d {{.Destination}} " +
"-p {{.Prefix}} " +
"--batch"
@ -187,6 +188,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&awscommon.StepKeyPair{
Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("ec2_%s.pem", b.config.PackerBuildName),
KeyPairName: b.config.TemporaryKeyPairName,
},
&awscommon.StepSecurityGroup{
SecurityGroupId: b.config.SecurityGroupId,
@ -214,16 +216,15 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&StepBundleVolume{},
&StepUploadBundle{},
&StepRegisterAMI{},
&awscommon.StepModifyAMIAttributes{
Description: b.config.AMIDescription,
Users: b.config.AMIUsers,
Groups: b.config.AMIGroups,
ProductCodes: b.config.AMIProductCodes,
},
&awscommon.StepAMIRegionCopy{
Regions: b.config.AMIRegions,
Tags: b.config.AMITags,
},
&awscommon.StepCreateTags{
Tags: b.config.AMITags,
},

View File

@ -37,8 +37,16 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction {
state.Put("amis", amis)
// Wait for the image to become ready
stateChange := awscommon.StateChangeConf{
Conn: ec2conn,
Pending: []string{"pending"},
Target: "available",
Refresh: awscommon.AMIStateRefreshFunc(ec2conn, registerResp.ImageId),
StepState: state,
}
ui.Say("Waiting for AMI to become ready...")
if _, err := awscommon.WaitForState(&stateChange); err != nil {
err := fmt.Errorf("Error waiting for AMI: %s", err)
state.Put("error", err)
ui.Error(err.Error())

View File

@ -14,6 +14,7 @@ import (
"net/http"
"net/url"
"strings"
"time"
)
const DIGITALOCEAN_API_URL = "https://api.digitalocean.com"
@ -191,46 +192,63 @@ func NewRequest(d DigitalOceanClient, path string, params url.Values) (map[strin
url := fmt.Sprintf("%s/%s?%s", DIGITALOCEAN_API_URL, path, params.Encode())
var decodedResponse map[string]interface{}
// Do some basic scrubbing so sensitive information doesn't appear in logs
scrubbedUrl := strings.Replace(url, d.ClientID, "CLIENT_ID", -1)
scrubbedUrl = strings.Replace(scrubbedUrl, d.APIKey, "API_KEY", -1)
log.Printf("sending new request to digitalocean: %s", scrubbedUrl)
var lastErr error
for attempts := 1; attempts < 10; attempts++ {
resp, err := client.Get(url)
if err != nil {
return nil, err
}
body, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
return nil, err
}
log.Printf("response from digitalocean: %s", body)
var decodedResponse map[string]interface{}
err = json.Unmarshal(body, &decodedResponse)
if err != nil {
err = errors.New(fmt.Sprintf("Failed to decode JSON response (HTTP %v) from DigitalOcean: %s",
resp.StatusCode, body))
return decodedResponse, err
}
// Check for errors sent by digitalocean
status := decodedResponse["status"].(string)
if status == "OK" {
return decodedResponse, nil
}
if status == "ERROR" {
statusRaw, ok := decodedResponse["message"]
if ok {
status = statusRaw.(string)
} else {
status = fmt.Sprintf(
"Unknown error. Full response body: %s", body)
}
}
lastErr = errors.New(fmt.Sprintf("Received error from DigitalOcean (%d): %s",
resp.StatusCode, status))
log.Println(lastErr)
if strings.Contains(status, "a pending event") {
// Retry, DigitalOcean sends these dumb "pending event"
// errors all the time.
time.Sleep(5 * time.Second)
continue
}
// Some other kind of error. Just return.
return decodedResponse, lastErr
}
return nil, lastErr
}

View File

@ -34,13 +34,11 @@ type config struct {
SSHPort uint `mapstructure:"ssh_port"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
RawStateTimeout string `mapstructure:"state_timeout"`
// These are unexported since they're set by other fields
// being set.
sshTimeout time.Duration
stateTimeout time.Duration
tpl *packer.ConfigTemplate
@ -113,12 +111,6 @@ func (b *Builder) Prepare(raws ...interface{}) error {
b.config.RawSSHTimeout = "1m"
}
if b.config.RawStateTimeout == "" {
// Default to 6 minute timeouts waiting for
// desired state. i.e waiting for droplet to become active
@ -131,7 +123,6 @@ func (b *Builder) Prepare(raws ...interface{}) error {
"snapshot_name": &b.config.SnapshotName,
"ssh_username": &b.config.SSHUsername,
"ssh_timeout": &b.config.RawSSHTimeout,
"state_timeout": &b.config.RawStateTimeout,
}
@ -162,13 +153,6 @@ func (b *Builder) Prepare(raws ...interface{}) error {
}
b.config.sshTimeout = sshTimeout
stateTimeout, err := time.ParseDuration(b.config.RawStateTimeout)
if err != nil {
errs = packer.MultiErrorAppend(

View File

@ -258,38 +258,6 @@ func TestBuilderPrepare_SSHTimeout(t *testing.T) {
}
func TestBuilderPrepare_StateTimeout(t *testing.T) {
var b Builder
config := testConfig()

View File

@ -6,8 +6,6 @@ import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
type stepCreateDroplet struct {
@ -56,22 +54,7 @@ func (s *stepCreateDroplet) Cleanup(state multistep.StateBag) {
// Destroy the droplet we just created
ui.Say("Destroying droplet...")
err := client.DestroyDroplet(s.dropletId)
if err != nil {
curlstr := fmt.Sprintf("curl '%v/droplets/%v/destroy?client_id=%v&api_key=%v'",
DIGITALOCEAN_API_URL, s.dropletId, c.ClientID, c.APIKey)

View File

@ -38,8 +38,9 @@ func (s *stepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction {
state.Put("privateKey", string(pem.EncodeToMemory(&priv_blk)))
// Marshal the public key into SSH compatible format
// TODO properly handle the public key error
pub, _ := ssh.NewPublicKey(&priv.PublicKey)
pub_sshformat := string(ssh.MarshalAuthorizedKey(pub))
// The name of the public key on DO
name := fmt.Sprintf("packer-%s", hex.EncodeToString(identifier.NewUUID().Raw()))

View File

@ -16,7 +16,7 @@ func (s *stepDropletInfo) Run(state multistep.StateBag) multistep.StepAction {
ui.Say("Waiting for droplet to become active...")
err := waitForDropletState("active", dropletId, client, c.stateTimeout)
if err != nil {
err := fmt.Errorf("Error waiting for droplet to become active: %s", err)
state.Put("error", err)

View File

@ -5,7 +5,6 @@ import (
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
)
type stepPowerOff struct{}
@ -16,15 +15,22 @@ func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packer.Ui)
dropletId := state.Get("droplet_id").(uint)
_, status, err := client.DropletStatus(dropletId)
if err != nil {
err := fmt.Errorf("Error checking droplet state: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
if status == "off" {
// Droplet is already off, don't do anything
return multistep.ActionContinue
}
// Pull the plug on the Droplet
ui.Say("Forcefully shutting down Droplet...")
err = client.PowerOffDroplet(dropletId)
if err != nil {
err := fmt.Errorf("Error powering off droplet: %s", err)
state.Put("error", err)
@ -33,13 +39,12 @@ func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
}
log.Println("Waiting for poweroff event to complete...")
err = waitForDropletState("off", dropletId, client, c.stateTimeout)
if err != nil {
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
return multistep.ActionContinue
}

View File

@ -12,33 +12,58 @@ type stepShutdown struct{}
func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*DigitalOceanClient)
ui := state.Get("ui").(packer.Ui)
dropletId := state.Get("droplet_id").(uint)
// Gracefully power off the droplet. We have to retry this a number
// of times because sometimes it says it completed when it actually
// did absolutely nothing (*ALAKAZAM!* magic!). We give up after
// a pretty arbitrary amount of time.
ui.Say("Gracefully shutting down droplet...")
err := client.ShutdownDroplet(dropletId)
if err != nil {
// If we get an error the first time, actually report it
err := fmt.Errorf("Error shutting down droplet: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
ui.Say("Waiting for droplet to shutdown...")
// A channel we use as a flag to end our goroutines
done := make(chan struct{})
shutdownRetryDone := make(chan struct{})
// Make sure we wait for the shutdown retry goroutine to end
// before moving on.
defer func() {
close(done)
<-shutdownRetryDone
}()
// Start a goroutine that just keeps trying to shut down the
// droplet.
go func() {
defer close(shutdownRetryDone)
for attempts := 2; attempts > 0; attempts++ {
log.Printf("ShutdownDroplet attempt #%d...", attempts)
err := client.ShutdownDroplet(dropletId)
if err != nil {
log.Printf("Shutdown retry error: %s", err)
}
select {
case <-done:
return
case <-time.After(20 * time.Second):
// Retry!
}
}
}()
err = waitForDropletState("off", dropletId, client, 2*time.Minute)
if err != nil {
err := fmt.Errorf("Error waiting for droplet to become 'off': %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
log.Printf("Error waiting for graceful off: %s", err)
}
return multistep.ActionContinue

View File

@ -26,7 +26,7 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
}
ui.Say("Waiting for snapshot to complete...")
err = waitForDropletState("active", dropletId, client, c.stateTimeout)
if err != nil {
err := fmt.Errorf("Error waiting for snapshot to complete: %s", err)
state.Put("error", err)

View File

@ -1,16 +1,18 @@
package digitalocean
import (
"fmt"
"log"
"time"
)
// waitForState simply blocks until the droplet is in
// a state we expect, while eventually timing out.
func waitForDropletState(desiredState string, dropletId uint, client *DigitalOceanClient, timeout time.Duration) error {
done := make(chan struct{})
defer close(done)
result := make(chan error, 1)
go func() {
attempts := 0
for {
@ -19,36 +21,35 @@ func waitForDropletState(desiredState string, dropletId uint, client *DigitalOce
log.Printf("Checking droplet status... (attempt: %d)", attempts)
_, status, err := client.DropletStatus(dropletId)
if err != nil {
result <- err
return
}
if status == desiredState {
result <- nil
return
}
// Wait 3 seconds in between
time.Sleep(3 * time.Second)
// Verify we shouldn't exit
select {
case <-done:
// We finished, so just exit the goroutine
return
default:
// Keep going
}
}
}()
log.Printf("Waiting for up to %s for droplet to become %s", timeout, desiredState)
select {
case err := <-result:
return err
case <-time.After(timeout):
err := fmt.Errorf("Timeout while waiting for droplet to become '%s'", desiredState)
return err
}
}

View File

@ -28,10 +28,12 @@ type config struct {
DiskSize uint `mapstructure:"disk_size"`
FloppyFiles []string `mapstructure:"floppy_files"`
Format string `mapstructure:"format"`
GuestAdditionsAttach bool `mapstructure:"guest_additions_attach"`
GuestAdditionsPath string `mapstructure:"guest_additions_path"`
GuestAdditionsURL string `mapstructure:"guest_additions_url"`
GuestAdditionsSHA256 string `mapstructure:"guest_additions_sha256"`
GuestOSType string `mapstructure:"guest_os_type"`
HardDriveInterface string `mapstructure:"hard_drive_interface"`
Headless bool `mapstructure:"headless"`
HTTPDir string `mapstructure:"http_directory"`
HTTPPortMin uint `mapstructure:"http_port_min"`
@ -89,6 +91,10 @@ func (b *Builder) Prepare(raws ...interface{}) error {
b.config.GuestAdditionsPath = "VBoxGuestAdditions.iso"
}
if b.config.HardDriveInterface == "" {
b.config.HardDriveInterface = "ide"
}
if b.config.GuestOSType == "" {
b.config.GuestOSType = "Other"
}
@ -141,6 +147,7 @@ func (b *Builder) Prepare(raws ...interface{}) error {
templates := map[string]*string{
"guest_additions_sha256": &b.config.GuestAdditionsSHA256,
"guest_os_type": &b.config.GuestOSType,
"hard_drive_interface": &b.config.HardDriveInterface,
"http_directory": &b.config.HTTPDir,
"iso_checksum": &b.config.ISOChecksum,
"iso_checksum_type": &b.config.ISOChecksumType,
@ -209,6 +216,11 @@ func (b *Builder) Prepare(raws ...interface{}) error {
errs, errors.New("invalid format, only 'ovf' or 'ova' are allowed"))
}
if b.config.HardDriveInterface != "ide" && b.config.HardDriveInterface != "sata" {
errs = packer.MultiErrorAppend(
errs, errors.New("hard_drive_interface can only be ide or sata"))
}
if b.config.HTTPPortMin > b.config.HTTPPortMax {
errs = packer.MultiErrorAppend(
errs, errors.New("http_port_min must be less than http_port_max"))
@ -350,6 +362,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
new(stepCreateVM),
new(stepCreateDisk),
new(stepAttachISO),
new(stepAttachGuestAdditions),
new(stepAttachFloppy),
new(stepForwardSSH),
new(stepVBoxManage),

View File

@ -252,6 +252,38 @@ func TestBuilderPrepare_GuestAdditionsURL(t *testing.T) {
}
}
func TestBuilderPrepare_HardDriveInterface(t *testing.T) {
var b Builder
config := testConfig()
// Test a default hard_drive_interface
delete(config, "hard_drive_interface")
err := b.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if b.config.HardDriveInterface != "ide" {
t.Fatalf("bad: %s", b.config.HardDriveInterface)
}
// Test with a bad
config["hard_drive_interface"] = "fake"
b = Builder{}
err = b.Prepare(config)
if err == nil {
t.Fatal("should have error")
}
// Test with a good
config["hard_drive_interface"] = "sata"
b = Builder{}
err = b.Prepare(config)
if err != nil {
t.Fatalf("should not have error: %s", err)
}
}
func TestBuilderPrepare_HTTPPort(t *testing.T) {
var b Builder
config := testConfig()

View File

@ -135,9 +135,17 @@ func (d *VBox42Driver) Version() (string, error) {
versionOutput := strings.TrimSpace(stdout.String())
log.Printf("VBoxManage --version output: %s", versionOutput)
// If the "--version" output contains vboxdrv, then this is indicative
// of problems with the VirtualBox setup and we shouldn't really continue,
// whether or not we can read the version.
if strings.Contains(versionOutput, "vboxdrv") {
return "", fmt.Errorf("VirtualBox is not properly setup: %s", versionOutput)
}
versionRe := regexp.MustCompile("[^.0-9]")
matches := versionRe.Split(versionOutput, 2)
if len(matches) == 0 || matches[0] == "" {
return "", fmt.Errorf("No version found: %s", versionOutput)
}
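The version check above splits `VBoxManage --version` output on the first character that is not a digit or a dot, so suffixes such as a build revision are dropped; the strengthened guard also rejects output that starts with a non-version character. A runnable sketch of that parsing logic (the sample input strings are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// parseVBoxVersion extracts the leading dotted-numeric version from the
// --version output, exactly as the driver's regexp split does.
func parseVBoxVersion(out string) (string, error) {
	versionRe := regexp.MustCompile("[^.0-9]")
	matches := versionRe.Split(out, 2)
	if len(matches) == 0 || matches[0] == "" {
		// Output began with a non-version character (e.g. a warning banner).
		return "", fmt.Errorf("No version found: %s", out)
	}
	return matches[0], nil
}

func main() {
	v, _ := parseVBoxVersion("4.2.16r86992")
	fmt.Println(v) // 4.2.16
	_, err := parseVBoxVersion("WARNING: vboxdrv not loaded")
	fmt.Println(err != nil)
}
```

The `matches[0] == ""` case is why the vboxdrv warning path matters: a kernel-module warning printed before the version would otherwise yield an empty "version" instead of a clear error.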

View File

@ -0,0 +1,81 @@
package virtualbox
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
)
// This step attaches the VirtualBox guest additions as an inserted CD onto
// the virtual machine.
//
// Uses:
// config *config
// driver Driver
// guest_additions_path string
// ui packer.Ui
// vmName string
//
// Produces:
type stepAttachGuestAdditions struct {
attachedPath string
}
func (s *stepAttachGuestAdditions) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*config)
driver := state.Get("driver").(Driver)
guestAdditionsPath := state.Get("guest_additions_path").(string)
ui := state.Get("ui").(packer.Ui)
vmName := state.Get("vmName").(string)
// If we're not attaching the guest additions then just return
if !config.GuestAdditionsAttach {
log.Println("Not attaching guest additions since we're uploading.")
return multistep.ActionContinue
}
// Attach the guest additions to the computer
log.Println("Attaching guest additions ISO onto IDE controller...")
command := []string{
"storageattach", vmName,
"--storagectl", "IDE Controller",
"--port", "1",
"--device", "0",
"--type", "dvddrive",
"--medium", guestAdditionsPath,
}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error attaching guest additions: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
// Track the path so that we can unregister it from VirtualBox later
s.attachedPath = guestAdditionsPath
return multistep.ActionContinue
}
func (s *stepAttachGuestAdditions) Cleanup(state multistep.StateBag) {
if s.attachedPath == "" {
return
}
driver := state.Get("driver").(Driver)
ui := state.Get("ui").(packer.Ui)
vmName := state.Get("vmName").(string)
command := []string{
"storageattach", vmName,
"--storagectl", "IDE Controller",
"--port", "1",
"--device", "0",
"--medium", "none",
}
if err := driver.VBoxManage(command...); err != nil {
ui.Error(fmt.Sprintf("Error unregistering guest additions: %s", err))
}
}

View File

@ -39,7 +39,9 @@ func (s *stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
// Add the IDE controller so we can later attach the disk
// Add the IDE controller so we can later attach the disk.
// When the hard disk controller is not IDE, this device is still used
// by VirtualBox to deliver the guest extensions.
controllerName := "IDE Controller"
err = driver.VBoxManage("storagectl", vmName, "--name", controllerName, "--add", "ide")
if err != nil {
@ -49,6 +51,25 @@ func (s *stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
// Add a SATA controller if we were asked to use SATA. We still attach
// the IDE controller above because some other things (disks) require
// that.
if config.HardDriveInterface == "sata" {
controllerName = "SATA Controller"
command = []string{
"storagectl", vmName,
"--name", controllerName,
"--add", "sata",
"--sataportcount", "1",
}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error creating disk controller: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
// Attach the disk to the controller
command = []string{
"storageattach", vmName,

View File

@ -50,7 +50,7 @@ func (s *stepExport) Run(state multistep.StateBag) multistep.StepAction {
}
// Export the VM to an OVF
outputPath := filepath.Join(config.OutputDir, vmName+"."+config.Format)
command = []string{
"export",

View File

@ -92,29 +92,34 @@ func (s *stepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
func (*stepTypeBootCommand) Cleanup(multistep.StateBag) {}
func scancodes(message string) []string {
// Scancodes reference: http://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html
//
// Scancodes represent raw keyboard output and are fed to the VM by the
// VBoxManage controlvm keyboardputscancode program.
//
// Scancodes are recorded here in pairs. The first entry represents
// the key press and the second entry represents the key release and is
// derived from the first by the addition of 0x80.
special := make(map[string][]string)
special["<bs>"] = []string{"0e", "8e"}
special["<del>"] = []string{"53", "d3"}
special["<enter>"] = []string{"1c", "9c"}
special["<esc>"] = []string{"01", "81"}
special["<f1>"] = []string{"3b", "bb"}
special["<f2>"] = []string{"3c", "bc"}
special["<f3>"] = []string{"3d", "bd"}
special["<f4>"] = []string{"3e", "be"}
special["<f5>"] = []string{"3f", "bf"}
special["<f6>"] = []string{"40", "c0"}
special["<f7>"] = []string{"41", "c1"}
special["<f8>"] = []string{"42", "c2"}
special["<f9>"] = []string{"43", "c3"}
special["<f10>"] = []string{"44", "c4"}
special["<return>"] = []string{"1c", "9c"}
special["<tab>"] = []string{"0f", "8f"}
shiftedChars := "~!@#$%^&*()_+{}|:\"<>?"
scancodeIndex := make(map[string]uint)
scancodeIndex["1234567890-="] = 0x02
scancodeIndex["!@#$%^&*()_+"] = 0x02

View File

@ -4,6 +4,7 @@ import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
"os"
)
@ -21,6 +22,12 @@ func (s *stepUploadGuestAdditions) Run(state multistep.StateBag) multistep.StepA
guestAdditionsPath := state.Get("guest_additions_path").(string)
ui := state.Get("ui").(packer.Ui)
// If we're attaching then don't do this, since we attached.
if config.GuestAdditionsAttach {
log.Println("Not uploading guest additions since we're attaching.")
return multistep.ActionContinue
}
version, err := driver.Version()
if err != nil {
state.Put("error", fmt.Errorf("Error reading version for guest additions upload: %s", err))

View File

@ -88,6 +88,9 @@ func (stepCreateVMX) Run(state multistep.StateBag) multistep.StepAction {
vmxData["floppy0.fileName"] = floppyPathRaw.(string)
}
// Set this so that no dialogs ever appear from Packer.
vmxData["msg.autoAnswer"] = "true"
vmxPath := filepath.Join(config.OutputDir, config.VMName+".vmx")
if err := WriteVMX(vmxPath, vmxData); err != nil {
err := fmt.Errorf("Error creating VMX file: %s", err)

View File

@ -41,8 +41,9 @@ func CheckUnusedConfig(md *mapstructure.Metadata) *packer.MultiError {
func DecodeConfig(target interface{}, raws ...interface{}) (*mapstructure.Metadata, error) {
var md mapstructure.Metadata
decoderConfig := &mapstructure.DecoderConfig{
Metadata: &md,
Result: target,
Metadata: &md,
Result: target,
WeaklyTypedInput: true,
}
decoder, err := mapstructure.NewDecoder(decoderConfig)

View File

@ -72,7 +72,9 @@ WaitLoop:
state.Put("communicator", comm)
break WaitLoop
case <-timeout:
ui.Error("Timeout waiting for SSH.")
err := fmt.Errorf("Timeout waiting for SSH.")
state.Put("error", err)
ui.Error(err.Error())
close(cancel)
return multistep.ActionHalt
case <-time.After(1 * time.Second):

View File

@ -285,14 +285,16 @@ func (c *comm) scpSession(scpCommand string, f func(io.Writer, *bufio.Reader) er
// Start the sink mode on the other side
// TODO(mitchellh): There are probably issues with shell escaping the path
log.Println("Starting remote scp process: %s", scpCommand)
log.Println("Starting remote scp process: ", scpCommand)
if err := session.Start(scpCommand); err != nil {
return err
}
// Call our callback that executes in the context of SCP
// Call our callback that executes in the context of SCP. We ignore
// EOF errors if they occur because it usually means that SCP prematurely
// ended on the other side.
log.Println("Started SCP session, beginning transfers...")
if err := f(stdinW, stdoutR); err != nil {
if err := f(stdinW, stdoutR); err != nil && err != io.EOF {
return err
}
@ -406,8 +408,27 @@ func scpUploadDir(root string, fs []os.FileInfo, w io.Writer, r *bufio.Reader) e
for _, fi := range fs {
realPath := filepath.Join(root, fi.Name())
if !fi.IsDir() {
// It is a regular file, just upload it
// Track if this is actually a symlink to a directory. If it is
// a symlink to a file we don't do any special behavior because uploading
// a file just works. If it is a directory, we need to know so we
// treat it as such.
isSymlinkToDir := false
if fi.Mode()&os.ModeSymlink == os.ModeSymlink {
symPath, err := filepath.EvalSymlinks(realPath)
if err != nil {
return err
}
symFi, err := os.Lstat(symPath)
if err != nil {
return err
}
isSymlinkToDir = symFi.IsDir()
}
if !fi.IsDir() && !isSymlinkToDir {
// It is a regular file (or symlink to a file), just upload it
f, err := os.Open(realPath)
if err != nil {
return err

View File

@ -1,6 +1,7 @@
package ssh
import (
"code.google.com/p/go.crypto/ssh"
"crypto"
"crypto/dsa"
"crypto/rsa"
@ -53,15 +54,15 @@ func (k *SimpleKeychain) AddPEMKeyPassword(key string, password string) (err err
}
// Key method for ssh.ClientKeyring interface
func (k *SimpleKeychain) Key(i int) (interface{}, error) {
func (k *SimpleKeychain) Key(i int) (ssh.PublicKey, error) {
if i < 0 || i >= len(k.keys) {
return nil, nil
}
switch key := k.keys[i].(type) {
case *rsa.PrivateKey:
return &key.PublicKey, nil
return ssh.NewPublicKey(&key.PublicKey)
case *dsa.PrivateKey:
return &key.PublicKey, nil
return ssh.NewPublicKey(&key.PublicKey)
}
panic("unknown key type")
}

View File

@ -43,6 +43,7 @@ const defaultConfig = `
"provisioners": {
"chef-solo": "packer-provisioner-chef-solo",
"file": "packer-provisioner-file",
"puppet-masterless": "packer-provisioner-puppet-masterless",
"shell": "packer-provisioner-shell",
"salt-masterless": "packer-provisioner-salt-masterless"
}

View File

@ -67,7 +67,7 @@ type Communicator interface {
// is a trailing slash on the source "/". For example: "/tmp/src" as
// the source will create a "src" directory in the destination unless
// a trailing slash is added. This is identical behavior to rsync(1).
UploadDir(string, string, []string) error
UploadDir(dst string, src string, exclude []string) error
// Download downloads a file from the machine from the given remote path
// with the contents writing to the given writer. This method will

View File

@ -2,6 +2,8 @@ package packer
import (
"bytes"
"cgl.tideland.biz/identifier"
"encoding/hex"
"fmt"
"strconv"
"text/template"
@ -26,8 +28,10 @@ func NewConfigTemplate() (*ConfigTemplate, error) {
result.root = template.New("configTemplateRoot")
result.root.Funcs(template.FuncMap{
"isotime": templateISOTime,
"timestamp": templateTimestamp,
"user": result.templateUser,
"uuid": templateUuid,
})
return result, nil
@ -59,6 +63,11 @@ func (t *ConfigTemplate) Validate(s string) error {
return err
}
// Add additional functions to the template
func (t *ConfigTemplate) Funcs(funcs template.FuncMap) {
t.root.Funcs(funcs)
}
func (t *ConfigTemplate) nextTemplateName() string {
name := fmt.Sprintf("tpl%d", t.i)
t.i++
@ -76,6 +85,14 @@ func (t *ConfigTemplate) templateUser(n string) (string, error) {
return result, nil
}
func templateISOTime() string {
return time.Now().UTC().Format(time.RFC3339)
}
func templateTimestamp() string {
return strconv.FormatInt(time.Now().UTC().Unix(), 10)
}
func templateUuid() string {
return hex.EncodeToString(identifier.NewUUID().Raw())
}

View File

@ -7,6 +7,28 @@ import (
"time"
)
func TestConfigTemplateProcess_isotime(t *testing.T) {
tpl, err := NewConfigTemplate()
if err != nil {
t.Fatalf("err: %s", err)
}
result, err := tpl.Process(`{{isotime}}`, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
val, err := time.Parse(time.RFC3339, result)
if err != nil {
t.Fatalf("err: %s", err)
}
currentTime := time.Now().UTC()
if currentTime.Sub(val) > 2*time.Second {
t.Fatalf("val: %d (current: %d)", val, currentTime)
}
}
func TestConfigTemplateProcess_timestamp(t *testing.T) {
tpl, err := NewConfigTemplate()
if err != nil {
@ -47,6 +69,22 @@ func TestConfigTemplateProcess_user(t *testing.T) {
}
}
func TestConfigTemplateProcess_uuid(t *testing.T) {
tpl, err := NewConfigTemplate()
if err != nil {
t.Fatalf("err: %s", err)
}
result, err := tpl.Process(`{{uuid}}`, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if len(result) != 32 {
t.Fatalf("err: %s", result)
}
}
func TestConfigTemplateValidate(t *testing.T) {
tpl, err := NewConfigTemplate()
if err != nil {

View File

@ -317,10 +317,24 @@ func (c *Client) Start() (address string, err error) {
err = errors.New("timeout while waiting for plugin to start")
case <-exitCh:
err = errors.New("plugin exited before we could connect")
case line := <-linesCh:
// Trim the address and reset the err since we were able
// to read some sort of address.
c.address = strings.TrimSpace(string(line))
case lineBytes := <-linesCh:
// Trim the line and split by "|" in order to get the parts of
// the output.
line := strings.TrimSpace(string(lineBytes))
parts := strings.SplitN(line, "|", 2)
if len(parts) < 2 {
err = fmt.Errorf("Unrecognized remote plugin message: %s", line)
return
}
// Test the API version
if parts[0] != APIVersion {
err = fmt.Errorf("Incompatible API version with plugin. "+
"Plugin version: %s, Ours: %s", parts[0], APIVersion)
return
}
c.address = parts[1]
address = c.address
}

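The new plugin handshake line has the form `VERSION|ADDRESS`. The client-side parsing added above can be sketched in isolation; the `parseHandshake` helper is hypothetical (the real logic lives inline in `Client.Start`), but the split-and-compare steps match the diff:

```go
package main

import (
	"fmt"
	"strings"
)

const apiVersion = "1"

// parseHandshake validates a "VERSION|ADDRESS" line emitted by a plugin
// on stdout, rejecting malformed lines and unknown API versions.
func parseHandshake(line string) (string, error) {
	parts := strings.SplitN(strings.TrimSpace(line), "|", 2)
	if len(parts) < 2 {
		return "", fmt.Errorf("Unrecognized remote plugin message: %s", line)
	}
	if parts[0] != apiVersion {
		return "", fmt.Errorf("Incompatible API version with plugin. "+
			"Plugin version: %s, Ours: %s", parts[0], apiVersion)
	}
	return parts[1], nil
}

func main() {
	addr, err := parseHandshake("1|:1234\n")
	fmt.Println(addr, err)
}
```

Note how the `bad-version` test helper later in this commit prints `%s1|:1234` — i.e. version `"11"` — precisely to trip the version comparison.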
View File

@ -37,6 +37,21 @@ func TestClient(t *testing.T) {
}
}
func TestClientStart_badVersion(t *testing.T) {
config := &ClientConfig{
Cmd: helperProcess("bad-version"),
StartTimeout: 50 * time.Millisecond,
}
c := NewClient(config)
defer c.Kill()
_, err := c.Start()
if err == nil {
t.Fatal("err should not be nil")
}
}
func TestClient_Start_Timeout(t *testing.T) {
config := &ClientConfig{
Cmd: helperProcess("start-timeout"),

View File

@ -30,9 +30,16 @@ var Interrupts int32 = 0
const MagicCookieKey = "PACKER_PLUGIN_MAGIC_COOKIE"
const MagicCookieValue = "d602bf8f470bc67ca7faa0386276bbdd4330efaf76d1a219cb4d6991ca9872b2"
// The APIVersion is outputted along with the RPC address. The plugin
// client validates this API version and will show an error if it doesn't
// know how to speak it.
const APIVersion = "1"
// This serves a single RPC connection on the given RPC server on
// a random port.
func serve(server *rpc.Server) (err error) {
log.Printf("Plugin build against Packer '%s'", packer.GitCommit)
if os.Getenv(MagicCookieKey) != MagicCookieValue {
return errors.New("Please do not execute plugins directly. Packer will execute these for you.")
}
@ -75,7 +82,7 @@ func serve(server *rpc.Server) (err error) {
// Output the address to stdout
log.Printf("Plugin address: %s\n", address)
fmt.Println(address)
fmt.Printf("%s|%s\n", APIVersion, address)
os.Stdout.Sync()
// Accept a connection

View File

@ -50,6 +50,9 @@ func TestHelperProcess(*testing.T) {
cmd, args := args[0], args[1:]
switch cmd {
case "bad-version":
fmt.Printf("%s1|:1234\n", APIVersion)
<-make(chan int)
case "builder":
ServeBuilder(new(helperBuilder))
case "command":
@ -59,7 +62,7 @@ func TestHelperProcess(*testing.T) {
case "invalid-rpc-address":
fmt.Println("lolinvalid")
case "mock":
fmt.Println(":1234")
fmt.Printf("%s|:1234\n", APIVersion)
<-make(chan int)
case "post-processor":
ServePostProcessor(new(helperPostProcessor))
@ -69,11 +72,11 @@ func TestHelperProcess(*testing.T) {
time.Sleep(1 * time.Minute)
os.Exit(1)
case "stderr":
fmt.Println(":1234")
fmt.Printf("%s|:1234\n", APIVersion)
log.Println("HELLO")
log.Println("WORLD")
case "stdin":
fmt.Println(":1234")
fmt.Printf("%s|:1234\n", APIVersion)
data := make([]byte, 5)
if _, err := os.Stdin.Read(data); err != nil {
log.Printf("stdin read error: %s", err)

View File

@ -52,7 +52,7 @@ func (b *build) Run(ui packer.Ui, cache packer.Cache) ([]packer.Artifact, error)
artifacts := make([]packer.Artifact, len(result))
for i, addr := range result {
client, err := rpc.Dial("tcp", addr)
client, err := rpcDial(addr)
if err != nil {
return nil, err
}
@ -92,7 +92,7 @@ func (b *BuildServer) Prepare(v map[string]string, reply *error) error {
}
func (b *BuildServer) Run(args *BuildRunArgs, reply *[]string) error {
client, err := rpc.Dial("tcp", args.UiRPCAddress)
client, err := rpcDial(args.UiRPCAddress)
if err != nil {
return err
}

View File

@ -5,7 +5,6 @@ import (
"fmt"
"github.com/mitchellh/packer/packer"
"log"
"net"
"net/rpc"
)
@ -95,7 +94,7 @@ func (b *builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
return nil, nil
}
client, err := rpc.Dial("tcp", response.RPCAddress)
client, err := rpcDial(response.RPCAddress)
if err != nil {
return nil, err
}
@ -119,12 +118,12 @@ func (b *BuilderServer) Prepare(args *BuilderPrepareArgs, reply *error) error {
}
func (b *BuilderServer) Run(args *BuilderRunArgs, reply *interface{}) error {
client, err := rpc.Dial("tcp", args.RPCAddress)
client, err := rpcDial(args.RPCAddress)
if err != nil {
return err
}
responseC, err := net.Dial("tcp", args.ResponseAddress)
responseC, err := tcpDial(args.ResponseAddress)
if err != nil {
return err
}

View File

@ -66,7 +66,7 @@ func (c *CommandServer) Help(args *interface{}, reply *string) error {
}
func (c *CommandServer) Run(args *CommandRunArgs, reply *int) error {
client, err := rpc.Dial("tcp", args.RPCAddress)
client, err := rpcDial(args.RPCAddress)
if err != nil {
return err
}

View File

@ -177,7 +177,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
toClose := make([]net.Conn, 0)
if args.StdinAddress != "" {
stdinC, err := net.Dial("tcp", args.StdinAddress)
stdinC, err := tcpDial(args.StdinAddress)
if err != nil {
return err
}
@ -187,7 +187,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
}
if args.StdoutAddress != "" {
stdoutC, err := net.Dial("tcp", args.StdoutAddress)
stdoutC, err := tcpDial(args.StdoutAddress)
if err != nil {
return err
}
@ -197,7 +197,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
}
if args.StderrAddress != "" {
stderrC, err := net.Dial("tcp", args.StderrAddress)
stderrC, err := tcpDial(args.StderrAddress)
if err != nil {
return err
}
@ -208,7 +208,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
// Connect to the response address so we can write our result to it
// when ready.
responseC, err := net.Dial("tcp", args.ResponseAddress)
responseC, err := tcpDial(args.ResponseAddress)
if err != nil {
return err
}
@ -234,7 +234,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
}
func (c *CommunicatorServer) Upload(args *CommunicatorUploadArgs, reply *interface{}) (err error) {
readerC, err := net.Dial("tcp", args.ReaderAddress)
readerC, err := tcpDial(args.ReaderAddress)
if err != nil {
return
}
@ -250,7 +250,7 @@ func (c *CommunicatorServer) UploadDir(args *CommunicatorUploadDirArgs, reply *e
}
func (c *CommunicatorServer) Download(args *CommunicatorDownloadArgs, reply *interface{}) (err error) {
writerC, err := net.Dial("tcp", args.WriterAddress)
writerC, err := tcpDial(args.WriterAddress)
if err != nil {
return
}

packer/rpc/dial.go Normal file
View File

@ -0,0 +1,33 @@
package rpc
import (
"net"
"net/rpc"
)
// rpcDial makes a TCP connection to a remote RPC server and returns
// the client. This will set the connection up properly so that keep-alives
// are set and so on and should be used to make all RPC connections within
// this package.
func rpcDial(address string) (*rpc.Client, error) {
tcpConn, err := tcpDial(address)
if err != nil {
return nil, err
}
// Create an RPC client around our connection
return rpc.NewClient(tcpConn), nil
}
// tcpDial connects via TCP to the designated address.
func tcpDial(address string) (*net.TCPConn, error) {
conn, err := net.Dial("tcp", address)
if err != nil {
return nil, err
}
// Set a keep-alive so that the connection stays alive even when idle
tcpConn := conn.(*net.TCPConn)
tcpConn.SetKeepAlive(true)
return tcpConn, nil
}

View File

@ -28,7 +28,7 @@ func (e *Environment) Builder(name string) (b packer.Builder, err error) {
return
}
client, err := rpc.Dial("tcp", reply)
client, err := rpcDial(reply)
if err != nil {
return
}
@ -43,7 +43,7 @@ func (e *Environment) Cache() packer.Cache {
panic(err)
}
client, err := rpc.Dial("tcp", reply)
client, err := rpcDial(reply)
if err != nil {
panic(err)
}
@ -64,7 +64,7 @@ func (e *Environment) Hook(name string) (h packer.Hook, err error) {
return
}
client, err := rpc.Dial("tcp", reply)
client, err := rpcDial(reply)
if err != nil {
return
}
@ -80,7 +80,7 @@ func (e *Environment) PostProcessor(name string) (p packer.PostProcessor, err er
return
}
client, err := rpc.Dial("tcp", reply)
client, err := rpcDial(reply)
if err != nil {
return
}
@ -96,7 +96,7 @@ func (e *Environment) Provisioner(name string) (p packer.Provisioner, err error)
return
}
client, err := rpc.Dial("tcp", reply)
client, err := rpcDial(reply)
if err != nil {
return
}
@ -109,7 +109,7 @@ func (e *Environment) Ui() packer.Ui {
var reply string
e.client.Call("Environment.Ui", new(interface{}), &reply)
client, err := rpc.Dial("tcp", reply)
client, err := rpcDial(reply)
if err != nil {
panic(err)
}

View File

@ -46,7 +46,7 @@ func (h *hook) Cancel() {
}
func (h *HookServer) Run(args *HookRunArgs, reply *interface{}) error {
client, err := rpc.Dial("tcp", args.RPCAddress)
client, err := rpcDial(args.RPCAddress)
if err != nil {
return err
}

View File

@ -57,7 +57,7 @@ func (p *postProcessor) PostProcess(ui packer.Ui, a packer.Artifact) (packer.Art
return nil, false, nil
}
client, err := rpc.Dial("tcp", response.RPCAddress)
client, err := rpcDial(response.RPCAddress)
if err != nil {
return nil, false, err
}
@ -75,7 +75,7 @@ func (p *PostProcessorServer) Configure(args *PostProcessorConfigureArgs, reply
}
func (p *PostProcessorServer) PostProcess(address string, reply *PostProcessorProcessResponse) error {
client, err := rpc.Dial("tcp", address)
client, err := rpcDial(address)
if err != nil {
return err
}

View File

@ -65,7 +65,7 @@ func (p *ProvisionerServer) Prepare(args *ProvisionerPrepareArgs, reply *error)
}
func (p *ProvisionerServer) Provision(args *ProvisionerProvisionArgs, reply *interface{}) error {
client, err := rpc.Dial("tcp", args.RPCAddress)
client, err := rpcDial(args.RPCAddress)
if err != nil {
return err
}

View File

@ -48,6 +48,8 @@ type RawBuilderConfig struct {
// configuration. It contains the type of the post processor as well as the
// raw configuration that is handed to the post-processor for it to process.
type RawPostProcessorConfig struct {
TemplateOnlyExcept `mapstructure:",squash"`
Type string
KeepInputArtifact bool `mapstructure:"keep_input_artifact"`
RawConfig map[string]interface{}
@ -57,6 +59,8 @@ type RawPostProcessorConfig struct {
// It contains the type of the provisioner as well as the raw configuration
// that is handed to the provisioner for it to process.
type RawProvisionerConfig struct {
TemplateOnlyExcept `mapstructure:",squash"`
Type string
Override map[string]interface{}
@ -120,18 +124,25 @@ func ParseTemplate(data []byte) (t *Template, err error) {
// Gather all the variables
for k, v := range rawTpl.Variables {
var variable RawVariable
variable.Default = ""
variable.Required = v == nil
if v != nil {
def, ok := v.(string)
if !ok {
errors = append(errors,
fmt.Errorf("variable '%s': default value must be string or null", k))
continue
}
// Create a new mapstructure decoder in order to decode the default
// value since this is the only value in the regular template that
// can be weakly typed.
decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
Result: &variable.Default,
WeaklyTypedInput: true,
})
if err != nil {
// This should never happen.
panic(err)
}
variable.Default = def
err = decoder.Decode(v)
if err != nil {
errors = append(errors,
fmt.Errorf("Error decoding default value for user var '%s': %s", k, err))
continue
}
t.Variables[k] = variable
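The weakly typed decode above is what lets a template value like `"baz": 27` become the string default `"27"` (as the updated variables test later asserts). Without pulling in mapstructure, the scalar-to-string coercion can be sketched with the standard library; `coerceToString` is a hypothetical helper approximating what `WeaklyTypedInput` does for this one field:

```go
package main

import (
	"fmt"
	"strconv"
)

// coerceToString converts the scalar types a JSON template value can
// take into the string form a user-variable default needs.
func coerceToString(v interface{}) (string, error) {
	switch t := v.(type) {
	case nil:
		return "", nil // null default: empty string, variable required
	case string:
		return t, nil
	case bool:
		return strconv.FormatBool(t), nil
	case float64: // encoding/json decodes all JSON numbers to float64
		return strconv.FormatFloat(t, 'f', -1, 64), nil
	default:
		return "", fmt.Errorf("unsupported default type: %T", v)
	}
}

func main() {
	s, _ := coerceToString(float64(27))
	fmt.Println(s)
}
```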
@ -189,32 +200,50 @@ func ParseTemplate(data []byte) (t *Template, err error) {
continue
}
t.PostProcessors[i] = make([]RawPostProcessorConfig, len(rawPP))
configs := t.PostProcessors[i]
configs := make([]RawPostProcessorConfig, 0, len(rawPP))
for j, pp := range rawPP {
config := &configs[j]
if err := mapstructure.Decode(pp, config); err != nil {
var config RawPostProcessorConfig
if err := mapstructure.Decode(pp, &config); err != nil {
if merr, ok := err.(*mapstructure.Error); ok {
for _, err := range merr.Errors {
errors = append(errors, fmt.Errorf("Post-processor #%d.%d: %s", i+1, j+1, err))
errors = append(errors,
fmt.Errorf("Post-processor #%d.%d: %s", i+1, j+1, err))
}
} else {
errors = append(errors, fmt.Errorf("Post-processor %d.%d: %s", i+1, j+1, err))
errors = append(errors,
fmt.Errorf("Post-processor %d.%d: %s", i+1, j+1, err))
}
continue
}
if config.Type == "" {
errors = append(errors, fmt.Errorf("Post-processor %d.%d: missing 'type'", i+1, j+1))
errors = append(errors,
fmt.Errorf("Post-processor %d.%d: missing 'type'", i+1, j+1))
continue
}
// Remove the input keep_input_artifact option
config.TemplateOnlyExcept.Prune(pp)
delete(pp, "keep_input_artifact")
// Verify that the only settings are good
if errs := config.TemplateOnlyExcept.Validate(t.Builders); len(errs) > 0 {
for _, err := range errs {
errors = append(errors,
fmt.Errorf("Post-processor %d.%d: %s", i+1, j+1, err))
}
continue
}
config.RawConfig = pp
// Add it to the list of configs
configs = append(configs, config)
}
t.PostProcessors[i] = configs
}
// Gather all the provisioners
@ -237,9 +266,8 @@ func ParseTemplate(data []byte) (t *Template, err error) {
continue
}
// The provisioners not only don't need or want the override settings
// (as they are processed as part of the preparation below), but will
// actively reject them as invalid configuration.
// Delete the keys that we used
raw.TemplateOnlyExcept.Prune(v)
delete(v, "override")
// Verify that the override keys exist...
@ -250,6 +278,14 @@ func ParseTemplate(data []byte) (t *Template, err error) {
}
}
// Verify that the only settings are good
if errs := raw.TemplateOnlyExcept.Validate(t.Builders); len(errs) > 0 {
for _, err := range errs {
errors = append(errors,
fmt.Errorf("provisioner %d: %s", i+1, err))
}
}
raw.RawConfig = v
}
@ -400,8 +436,12 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
// Prepare the post-processors
postProcessors := make([][]coreBuildPostProcessor, 0, len(t.PostProcessors))
for _, rawPPs := range t.PostProcessors {
current := make([]coreBuildPostProcessor, len(rawPPs))
for i, rawPP := range rawPPs {
current := make([]coreBuildPostProcessor, 0, len(rawPPs))
for _, rawPP := range rawPPs {
if rawPP.TemplateOnlyExcept.Skip(name) {
continue
}
pp, err := components.PostProcessor(rawPP.Type)
if err != nil {
return nil, err
@ -411,12 +451,18 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
return nil, fmt.Errorf("PostProcessor type not found: %s", rawPP.Type)
}
current[i] = coreBuildPostProcessor{
current = append(current, coreBuildPostProcessor{
processor: pp,
processorType: rawPP.Type,
config: rawPP.RawConfig,
keepInputArtifact: rawPP.KeepInputArtifact,
}
})
}
// If we have no post-processors in this chain, just continue.
// This can happen if the post-processors skip certain builds.
if len(current) == 0 {
continue
}
postProcessors = append(postProcessors, current)
@ -425,6 +471,10 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
// Prepare the provisioners
provisioners := make([]coreBuildProvisioner, 0, len(t.Provisioners))
for _, rawProvisioner := range t.Provisioners {
if rawProvisioner.TemplateOnlyExcept.Skip(name) {
continue
}
var provisioner Provisioner
provisioner, err = components.Provisioner(rawProvisioner.Type)
if err != nil {
@ -471,3 +521,69 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
return
}
// TemplateOnlyExcept contains the logic required for "only" and "except"
// meta-parameters.
type TemplateOnlyExcept struct {
Only []string
Except []string
}
// Prune will prune out the used values from the raw map.
func (t *TemplateOnlyExcept) Prune(raw map[string]interface{}) {
delete(raw, "except")
delete(raw, "only")
}
// Skip tests if we should skip putting this item onto a build.
func (t *TemplateOnlyExcept) Skip(name string) bool {
if len(t.Only) > 0 {
onlyFound := false
for _, n := range t.Only {
if n == name {
onlyFound = true
break
}
}
if !onlyFound {
// Skip this provisioner
return true
}
}
// If the name is in the except list, then skip that
for _, n := range t.Except {
if n == name {
return true
}
}
return false
}
// Validates the only/except parameters.
func (t *TemplateOnlyExcept) Validate(b map[string]RawBuilderConfig) (e []error) {
if len(t.Only) > 0 && len(t.Except) > 0 {
e = append(e,
fmt.Errorf("Only one of 'only' or 'except' may be specified."))
}
if len(t.Only) > 0 {
for _, n := range t.Only {
if _, ok := b[n]; !ok {
e = append(e,
fmt.Errorf("'only' specified builder '%s' not found", n))
}
}
}
for _, n := range t.Except {
if _, ok := b[n]; !ok {
e = append(e,
fmt.Errorf("'except' specified builder '%s' not found", n))
}
}
return
}

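The `only`/`except` semantics above are simple enough to verify in a self-contained sketch. `OnlyExcept` below mirrors `TemplateOnlyExcept.Skip`: a non-empty `only` list acts as a whitelist of build names, `except` as a blacklist, and (as `Validate` enforces) only one of the two should be set:

```go
package main

import "fmt"

// OnlyExcept mirrors TemplateOnlyExcept from packer/template.go.
type OnlyExcept struct {
	Only   []string
	Except []string
}

// Skip reports whether the component should be skipped for build name.
func (t *OnlyExcept) Skip(name string) bool {
	if len(t.Only) > 0 {
		found := false
		for _, n := range t.Only {
			if n == name {
				found = true
				break
			}
		}
		if !found {
			return true // not whitelisted
		}
	}
	for _, n := range t.Except {
		if n == name {
			return true // blacklisted
		}
	}
	return false
}

func main() {
	pp := &OnlyExcept{Except: []string{"test1"}}
	fmt.Println(pp.Skip("test1"), pp.Skip("test2"))
}
```

This is the behavior the `exceptPP`/`onlyPP` tests below pin down: `"except": ["test1"]` leaves the `test1` build with zero post-processors while `test2` keeps one.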
View File

@ -9,6 +9,33 @@ import (
"testing"
)
func testTemplateComponentFinder() *ComponentFinder {
builder := testBuilder()
pp := new(TestPostProcessor)
provisioner := &MockProvisioner{}
builderMap := map[string]Builder{
"test-builder": builder,
}
ppMap := map[string]PostProcessor{
"test-pp": pp,
}
provisionerMap := map[string]Provisioner{
"test-prov": provisioner,
}
builderFactory := func(n string) (Builder, error) { return builderMap[n], nil }
ppFactory := func(n string) (PostProcessor, error) { return ppMap[n], nil }
provFactory := func(n string) (Provisioner, error) { return provisionerMap[n], nil }
return &ComponentFinder{
Builder: builderFactory,
PostProcessor: ppFactory,
Provisioner: provFactory,
}
}
func TestParseTemplateFile_basic(t *testing.T) {
data := `
{
@ -364,7 +391,8 @@ func TestParseTemplate_Variables(t *testing.T) {
{
"variables": {
"foo": "bar",
"bar": null
"bar": null,
"baz": 27
},
"builders": [{"type": "something"}]
@ -376,7 +404,7 @@ func TestParseTemplate_Variables(t *testing.T) {
t.Fatalf("err: %s", err)
}
if result.Variables == nil || len(result.Variables) != 2 {
if result.Variables == nil || len(result.Variables) != 3 {
t.Fatalf("bad vars: %#v", result.Variables)
}
@ -395,6 +423,14 @@ func TestParseTemplate_Variables(t *testing.T) {
if !result.Variables["bar"].Required {
t.Fatal("bar should be required")
}
if result.Variables["baz"].Default != "27" {
t.Fatal("default should be empty")
}
if result.Variables["baz"].Required {
t.Fatal("baz should not be required")
}
}
func TestParseTemplate_variablesBadDefault(t *testing.T) {
@ -663,6 +699,386 @@ func TestTemplate_Build(t *testing.T) {
}
}
func TestTemplateBuild_exceptOnlyPP(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"except": ["test1"],
"only": ["test1"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptOnlyProv(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"except": ["test1"],
"only": ["test1"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptPPInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"except": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptPP(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"except": ["test1"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no post-processors
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.postProcessors) > 0 {
t.Fatal("should have no postProcessors")
}
// Verify test2 has no post-processors
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.postProcessors) != 1 {
t.Fatalf("invalid: %d", len(cbuild.postProcessors))
}
}
func TestTemplateBuild_exceptProvInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"except": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptProv(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"except": ["test1"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no provisioners
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.provisioners) > 0 {
t.Fatal("should have no provisioners")
}
// Verify test2 has no provisioners
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.provisioners) != 1 {
t.Fatalf("invalid: %d", len(cbuild.provisioners))
}
}
func TestTemplateBuild_onlyPPInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"only": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_onlyPP(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"only": ["test2"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no post-processors
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.postProcessors) > 0 {
t.Fatal("should have no postProcessors")
}
// Verify test2 has no post-processors
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.postProcessors) != 1 {
t.Fatalf("invalid: %d", len(cbuild.postProcessors))
}
}
func TestTemplateBuild_onlyProvInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"only": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_onlyProv(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"only": ["test2"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no provisioners
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.provisioners) > 0 {
t.Fatal("should have no provisioners")
}
// Verify test2 has no provisioners
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.provisioners) != 1 {
t.Fatalf("invalid: %d", len(cbuild.provisioners))
}
}
func TestTemplate_Build_ProvisionerOverride(t *testing.T) {
assert := asserts.NewTestingAsserts(t, true)

View File

@ -10,7 +10,7 @@ import (
var GitCommit string
// The version of packer.
const Version = "0.3.7"
const Version = "0.3.10"
// Any pre-release marker for the version. If this is "" (empty string),
// then it means that it is a final release. Otherwise, this is the

View File

@ -0,0 +1,10 @@
package main
import (
"github.com/mitchellh/packer/packer/plugin"
"github.com/mitchellh/packer/provisioner/puppet-masterless"
)
func main() {
plugin.ServeProvisioner(new(puppetmasterless.Provisioner))
}

View File

@ -0,0 +1 @@
package main

View File

@ -24,15 +24,12 @@ type Config struct {
}
type PostProcessor struct {
config Config
premade map[string]packer.PostProcessor
rawConfigs []interface{}
config Config
premade map[string]packer.PostProcessor
extraConfig map[string]interface{}
}
func (p *PostProcessor) Configure(raws ...interface{}) error {
// Store the raw configs for usage later
p.rawConfigs = raws
_, err := common.DecodeConfig(&p.config, raws...)
if err != nil {
return err
@ -45,10 +42,8 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
tpl.UserVars = p.config.PackerUserVars
// Defaults
ppExtraConfig := make(map[string]interface{})
if p.config.OutputPath == "" {
p.config.OutputPath = "packer_{{ .BuildName }}_{{.Provider}}.box"
ppExtraConfig["output"] = p.config.OutputPath
}
// Accumulate any errors
@ -58,10 +53,18 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
errs, fmt.Errorf("Error parsing output template: %s", err))
}
// Store the extra configuration for post-processors
p.rawConfigs = append(p.rawConfigs, ppExtraConfig)
// Store extra configuration we'll send to each post-processor type
p.extraConfig = make(map[string]interface{})
p.extraConfig["output"] = p.config.OutputPath
p.extraConfig["packer_build_name"] = p.config.PackerBuildName
p.extraConfig["packer_builder_type"] = p.config.PackerBuilderType
p.extraConfig["packer_debug"] = p.config.PackerDebug
p.extraConfig["packer_force"] = p.config.PackerForce
p.extraConfig["packer_user_variables"] = p.config.PackerUserVars
// TODO(mitchellh): Properly handle multiple raw configs
// TODO(mitchellh): Properly handle multiple raw configs. This isn't
// very pressing at the moment because at the time of this comment
// only the first member of raws can contain the actual type-overrides.
var mapConfig map[string]interface{}
if err := mapstructure.Decode(raws[0], &mapConfig); err != nil {
errs = packer.MultiErrorAppend(errs,
@ -71,18 +74,14 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
p.premade = make(map[string]packer.PostProcessor)
for k, raw := range mapConfig {
pp := keyToPostProcessor(k)
if pp == nil {
pp, err := p.subPostProcessor(k, raw, p.extraConfig)
if err != nil {
errs = packer.MultiErrorAppend(errs, err)
continue
}
// Create the proper list of configurations
ppConfigs := make([]interface{}, 0, len(p.rawConfigs)+1)
copy(ppConfigs, p.rawConfigs)
ppConfigs = append(ppConfigs, raw)
if err := pp.Configure(ppConfigs...); err != nil {
errs = packer.MultiErrorAppend(errs, err)
if pp == nil {
continue
}
p.premade[k] = pp
@ -106,13 +105,15 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
pp, ok := p.premade[ppName]
if !ok {
log.Printf("Premade post-processor for '%s' not found. Creating.", ppName)
pp = keyToPostProcessor(ppName)
if pp == nil {
return nil, false, fmt.Errorf("Vagrant box post-processor not found: %s", ppName)
var err error
pp, err = p.subPostProcessor(ppName, nil, p.extraConfig)
if err != nil {
return nil, false, err
}
if err := pp.Configure(p.rawConfigs...); err != nil {
return nil, false, err
if pp == nil {
return nil, false, fmt.Errorf("Vagrant box post-processor not found: %s", ppName)
}
}
@ -120,6 +121,21 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
return pp.PostProcess(ui, artifact)
}
func (p *PostProcessor) subPostProcessor(key string, specific interface{}, extra map[string]interface{}) (packer.PostProcessor, error) {
pp := keyToPostProcessor(key)
if pp == nil {
return nil, nil
}
if err := pp.Configure(extra, specific); err != nil {
return nil, err
}
return pp, nil
}
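The `pp.Configure(extra, specific)` call above relies on later configuration maps overriding earlier ones, so the shared extra config forms a base that each post-processor's own settings can shadow. A minimal, self-contained sketch of that layering (independent of Packer's actual mapstructure-based decoding):

```go
package main

import "fmt"

// layerConfigs mimics how Configure receives multiple raw config maps:
// later maps win, so type-specific settings override the shared extras.
func layerConfigs(maps ...map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{})
	for _, m := range maps {
		for k, v := range m {
			out[k] = v
		}
	}
	return out
}

func main() {
	extra := map[string]interface{}{
		"output":       "packer_{{ .BuildName }}_{{.Provider}}.box",
		"packer_debug": false,
	}
	specific := map[string]interface{}{"output": "custom.box"}
	merged := layerConfigs(extra, specific)
	fmt.Println(merged["output"]) // custom.box
}
```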
// keyToPostProcessor maps a configuration key to the actual post-processor
// it will be configuring. This returns a new instance of that post-processor.
func keyToPostProcessor(key string) packer.PostProcessor {
switch key {
case "aws":

View File

@ -9,6 +9,7 @@ import (
"fmt"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/packer"
"io/ioutil"
"os"
"path/filepath"
"strings"
@ -17,7 +18,12 @@ import (
type Config struct {
common.PackerConfig `mapstructure:",squash"`
ChefEnvironment string `mapstructure:"chef_environment"`
ConfigTemplate string `mapstructure:"config_template"`
CookbookPaths []string `mapstructure:"cookbook_paths"`
RolesPath string `mapstructure:"roles_path"`
DataBagsPath string `mapstructure:"data_bags_path"`
EnvironmentsPath string `mapstructure:"environments_path"`
ExecuteCommand string `mapstructure:"execute_command"`
InstallCommand string `mapstructure:"install_command"`
RemoteCookbookPaths []string `mapstructure:"remote_cookbook_paths"`
@ -35,7 +41,18 @@ type Provisioner struct {
}
type ConfigTemplate struct {
CookbookPaths string
CookbookPaths string
DataBagsPath string
RolesPath string
EnvironmentsPath string
ChefEnvironment string
// Templates don't support boolean statements until Go 1.2. In the
// meantime, we do this.
// TODO(mitchellh): Remove when Go 1.2 is released
HasDataBagsPath bool
HasRolesPath bool
HasEnvironmentsPath bool
}
type ExecuteTemplate struct {
@ -80,7 +97,12 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
errs := common.CheckUnusedConfig(md)
templates := map[string]*string{
"staging_dir": &p.config.StagingDir,
"config_template": &p.config.ConfigTemplate,
"data_bags_path": &p.config.DataBagsPath,
"roles_path": &p.config.RolesPath,
"staging_dir": &p.config.StagingDir,
"environments_path": &p.config.EnvironmentsPath,
"chef_environment": &p.config.ChefEnvironment,
}
for n, ptr := range templates {
@ -121,6 +143,17 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
}
}
if p.config.ConfigTemplate != "" {
fi, err := os.Stat(p.config.ConfigTemplate)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad config template path: %s", err))
} else if fi.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Config template path must be a file: %s", p.config.ConfigTemplate))
}
}
for _, path := range p.config.CookbookPaths {
pFileInfo, err := os.Stat(path)
@ -130,6 +163,33 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
}
}
if p.config.RolesPath != "" {
pFileInfo, err := os.Stat(p.config.RolesPath)
if err != nil || !pFileInfo.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad roles path '%s': %s", p.config.RolesPath, err))
}
}
if p.config.DataBagsPath != "" {
pFileInfo, err := os.Stat(p.config.DataBagsPath)
if err != nil || !pFileInfo.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad data bags path '%s': %s", p.config.DataBagsPath, err))
}
}
if p.config.EnvironmentsPath != "" {
pFileInfo, err := os.Stat(p.config.EnvironmentsPath)
if err != nil || !pFileInfo.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad environments path '%s': %s", p.config.EnvironmentsPath, err))
}
}
// Process the user variables within the JSON and set the JSON.
// Do this early so that we can validate and show errors.
p.config.Json, err = p.processJsonUserVars()
@ -166,7 +226,31 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
cookbookPaths = append(cookbookPaths, targetPath)
}
configPath, err := p.createConfig(ui, comm, cookbookPaths)
rolesPath := ""
if p.config.RolesPath != "" {
rolesPath = fmt.Sprintf("%s/roles", p.config.StagingDir)
if err := p.uploadDirectory(ui, comm, rolesPath, p.config.RolesPath); err != nil {
return fmt.Errorf("Error uploading roles: %s", err)
}
}
dataBagsPath := ""
if p.config.DataBagsPath != "" {
dataBagsPath = fmt.Sprintf("%s/data_bags", p.config.StagingDir)
if err := p.uploadDirectory(ui, comm, dataBagsPath, p.config.DataBagsPath); err != nil {
return fmt.Errorf("Error uploading data bags: %s", err)
}
}
environmentsPath := ""
if p.config.EnvironmentsPath != "" {
environmentsPath = fmt.Sprintf("%s/environments", p.config.StagingDir)
if err := p.uploadDirectory(ui, comm, environmentsPath, p.config.EnvironmentsPath); err != nil {
return fmt.Errorf("Error uploading environments: %s", err)
}
}
configPath, err := p.createConfig(ui, comm, cookbookPaths, rolesPath, dataBagsPath, environmentsPath, p.config.ChefEnvironment)
if err != nil {
return fmt.Errorf("Error creating Chef config file: %s", err)
}
@ -203,7 +287,7 @@ func (p *Provisioner) uploadDirectory(ui packer.Ui, comm packer.Communicator, ds
return comm.UploadDir(dst, src, nil)
}
func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, localCookbooks []string) (string, error) {
func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, localCookbooks []string, rolesPath string, dataBagsPath string, environmentsPath string, chefEnvironment string) (string, error) {
ui.Message("Creating configuration file 'solo.rb'")
cookbook_paths := make([]string, len(p.config.RemoteCookbookPaths)+len(localCookbooks))
@ -216,8 +300,32 @@ func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, local
cookbook_paths[i] = fmt.Sprintf(`"%s"`, path)
}
configString, err := p.config.tpl.Process(DefaultConfigTemplate, &ConfigTemplate{
CookbookPaths: strings.Join(cookbook_paths, ","),
// Read the template
tpl := DefaultConfigTemplate
if p.config.ConfigTemplate != "" {
f, err := os.Open(p.config.ConfigTemplate)
if err != nil {
return "", err
}
defer f.Close()
tplBytes, err := ioutil.ReadAll(f)
if err != nil {
return "", err
}
tpl = string(tplBytes)
}
configString, err := p.config.tpl.Process(tpl, &ConfigTemplate{
CookbookPaths: strings.Join(cookbook_paths, ","),
RolesPath: rolesPath,
DataBagsPath: dataBagsPath,
EnvironmentsPath: environmentsPath,
HasRolesPath: rolesPath != "",
HasDataBagsPath: dataBagsPath != "",
HasEnvironmentsPath: environmentsPath != "",
ChefEnvironment: chefEnvironment,
})
if err != nil {
return "", err
@ -368,5 +476,15 @@ func (p *Provisioner) processJsonUserVars() (map[string]interface{}, error) {
}
var DefaultConfigTemplate = `
cookbook_path [{{.CookbookPaths}}]
cookbook_path [{{.CookbookPaths}}]
{{if .HasRolesPath}}
role_path "{{.RolesPath}}"
{{end}}
{{if .HasDataBagsPath}}
data_bag_path "{{.DataBagsPath}}"
{{end}}
{{if .HasEnvironmentsPath}}
environments_path "{{.EnvironmentsPath}}"
chef_environment "{{.ChefEnvironment}}"
{{end}}
`

View File

@ -19,6 +19,65 @@ func TestProvisioner_Impl(t *testing.T) {
}
}
func TestProvisionerPrepare_chefEnvironment(t *testing.T) {
var p Provisioner
config := testConfig()
config["chef_environment"] = "some-env"
err := p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if p.config.ChefEnvironment != "some-env" {
t.Fatalf("unexpected: %#v", p.config.ChefEnvironment)
}
}
func TestProvisionerPrepare_configTemplate(t *testing.T) {
var err error
var p Provisioner
// Test no config template
config := testConfig()
delete(config, "config_template")
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
// Test with a file
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(tf.Name())
config = testConfig()
config["config_template"] = tf.Name()
p = Provisioner{}
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
// Test with a directory
td, err := ioutil.TempDir("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.RemoveAll(td)
config = testConfig()
config["config_template"] = td
p = Provisioner{}
err = p.Prepare(config)
if err == nil {
t.Fatal("should have err")
}
}
func TestProvisionerPrepare_cookbookPaths(t *testing.T) {
var p Provisioner
@ -32,11 +91,25 @@ func TestProvisionerPrepare_cookbookPaths(t *testing.T) {
t.Fatalf("err: %s", err)
}
rolesPath, err := ioutil.TempDir("", "roles")
if err != nil {
t.Fatalf("err: %s", err)
}
dataBagsPath, err := ioutil.TempDir("", "data_bags")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(path1)
defer os.Remove(path2)
defer os.Remove(rolesPath)
defer os.Remove(dataBagsPath)
config := testConfig()
config["cookbook_paths"] = []string{path1, path2}
config["roles_path"] = rolesPath
config["data_bags_path"] = dataBagsPath
err = p.Prepare(config)
if err != nil {
@ -50,6 +123,80 @@ func TestProvisionerPrepare_cookbookPaths(t *testing.T) {
if p.config.CookbookPaths[0] != path1 || p.config.CookbookPaths[1] != path2 {
t.Fatalf("unexpected: %#v", p.config.CookbookPaths)
}
if p.config.RolesPath != rolesPath {
t.Fatalf("unexpected: %#v", p.config.RolesPath)
}
if p.config.DataBagsPath != dataBagsPath {
t.Fatalf("unexpected: %#v", p.config.DataBagsPath)
}
}
func TestProvisionerPrepare_dataBagsPath(t *testing.T) {
var p Provisioner
dataBagsPath, err := ioutil.TempDir("", "data_bags")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(dataBagsPath)
config := testConfig()
config["data_bags_path"] = dataBagsPath
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if p.config.DataBagsPath != dataBagsPath {
t.Fatalf("unexpected: %#v", p.config.DataBagsPath)
}
}
func TestProvisionerPrepare_environmentsPath(t *testing.T) {
var p Provisioner
environmentsPath, err := ioutil.TempDir("", "environments")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(environmentsPath)
config := testConfig()
config["environments_path"] = environmentsPath
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if p.config.EnvironmentsPath != environmentsPath {
t.Fatalf("unexpected: %#v", p.config.EnvironmentsPath)
}
}
func TestProvisionerPrepare_rolesPath(t *testing.T) {
var p Provisioner
rolesPath, err := ioutil.TempDir("", "roles")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(rolesPath)
config := testConfig()
config["roles_path"] = rolesPath
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if p.config.RolesPath != rolesPath {
t.Fatalf("unexpected: %#v", p.config.RolesPath)
}
}
func TestProvisionerPrepare_json(t *testing.T) {

View File

@ -72,6 +72,17 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
ui.Say(fmt.Sprintf("Uploading %s => %s", p.config.Source, p.config.Destination))
info, err := os.Stat(p.config.Source)
if err != nil {
return err
}
// If we're uploading a directory, short circuit and do that
if info.IsDir() {
return comm.UploadDir(p.config.Destination, p.config.Source, nil)
}
// We're uploading a file...
f, err := os.Open(p.config.Source)
if err != nil {
return err

View File

@ -0,0 +1,334 @@
// This package implements a provisioner for Packer that executes
// Puppet on the remote machine, configured to apply a local manifest
// versus connecting to a Puppet master.
package puppetmasterless
import (
"fmt"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/packer"
"os"
"path/filepath"
"strings"
)
type Config struct {
common.PackerConfig `mapstructure:",squash"`
tpl *packer.ConfigTemplate
// The command used to execute Puppet.
ExecuteCommand string `mapstructure:"execute_command"`
// Additional facts to set when executing Puppet
Facter map[string]string
// Path to a hiera configuration file to upload and use.
HieraConfigPath string `mapstructure:"hiera_config_path"`
// An array of local paths of modules to upload.
ModulePaths []string `mapstructure:"module_paths"`
// The main manifest file to apply to kick off the entire thing.
ManifestFile string `mapstructure:"manifest_file"`
// If true, `sudo` will NOT be used to execute Puppet.
PreventSudo bool `mapstructure:"prevent_sudo"`
// The directory where files will be uploaded. Packer requires write
// permissions in this directory.
StagingDir string `mapstructure:"staging_directory"`
}
type Provisioner struct {
config Config
}
type ExecuteTemplate struct {
FacterVars string
HasHieraConfigPath bool
HieraConfigPath string
ModulePath string
ManifestFile string
Sudo bool
}
func (p *Provisioner) Prepare(raws ...interface{}) error {
md, err := common.DecodeConfig(&p.config, raws...)
if err != nil {
return err
}
p.config.tpl, err = packer.NewConfigTemplate()
if err != nil {
return err
}
p.config.tpl.UserVars = p.config.PackerUserVars
// Accumulate any errors
errs := common.CheckUnusedConfig(md)
// Set some defaults
if p.config.ExecuteCommand == "" {
p.config.ExecuteCommand = "{{.FacterVars}} {{if .Sudo}} sudo -E {{end}}" +
"puppet apply --verbose --modulepath='{{.ModulePath}}' " +
"{{if .HasHieraConfigPath}}--hiera_config='{{.HieraConfigPath}}' {{end}}" +
"--detailed-exitcodes " +
"{{.ManifestFile}}"
}
if p.config.StagingDir == "" {
p.config.StagingDir = "/tmp/packer-puppet-masterless"
}
// Templates
templates := map[string]*string{
"hiera_config_path": &p.config.HieraConfigPath,
"manifest_file": &p.config.ManifestFile,
"staging_dir": &p.config.StagingDir,
}
for n, ptr := range templates {
var err error
*ptr, err = p.config.tpl.Process(*ptr, nil)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error processing %s: %s", n, err))
}
}
sliceTemplates := map[string][]string{
"module_paths": p.config.ModulePaths,
}
for n, slice := range sliceTemplates {
for i, elem := range slice {
var err error
slice[i], err = p.config.tpl.Process(elem, nil)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error processing %s[%d]: %s", n, i, err))
}
}
}
validates := map[string]*string{
"execute_command": &p.config.ExecuteCommand,
}
for n, ptr := range validates {
if err := p.config.tpl.Validate(*ptr); err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error parsing %s: %s", n, err))
}
}
newFacts := make(map[string]string)
for k, v := range p.config.Facter {
k, err := p.config.tpl.Process(k, nil)
if err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("Error processing facter key %s: %s", k, err))
continue
}
v, err := p.config.tpl.Process(v, nil)
if err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("Error processing facter value '%s': %s", v, err))
continue
}
newFacts[k] = v
}
p.config.Facter = newFacts
// Validation
if p.config.HieraConfigPath != "" {
info, err := os.Stat(p.config.HieraConfigPath)
if err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("hiera_config_path is invalid: %s", err))
} else if info.IsDir() {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("hiera_config_path must point to a file"))
}
}
if p.config.ManifestFile == "" {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("A manifest_file must be specified."))
} else {
info, err := os.Stat(p.config.ManifestFile)
if err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("manifest_file is invalid: %s", err))
} else if info.IsDir() {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("manifest_file must point to a file"))
}
}
for i, path := range p.config.ModulePaths {
info, err := os.Stat(path)
if err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("module_path[%d] is invalid: %s", i, err))
} else if !info.IsDir() {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("module_path[%d] must point to a directory", i))
}
}
if errs != nil && len(errs.Errors) > 0 {
return errs
}
return nil
}
func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
ui.Say("Provisioning with Puppet...")
ui.Message("Creating Puppet staging directory...")
if err := p.createDir(ui, comm, p.config.StagingDir); err != nil {
return fmt.Errorf("Error creating staging directory: %s", err)
}
// Upload hiera config if set
remoteHieraConfigPath := ""
if p.config.HieraConfigPath != "" {
var err error
remoteHieraConfigPath, err = p.uploadHieraConfig(ui, comm)
if err != nil {
return fmt.Errorf("Error uploading hiera config: %s", err)
}
}
// Upload all modules
modulePaths := make([]string, 0, len(p.config.ModulePaths))
for i, path := range p.config.ModulePaths {
ui.Message(fmt.Sprintf("Uploading local modules from: %s", path))
targetPath := fmt.Sprintf("%s/module-%d", p.config.StagingDir, i)
if err := p.uploadDirectory(ui, comm, targetPath, path); err != nil {
return fmt.Errorf("Error uploading modules: %s", err)
}
modulePaths = append(modulePaths, targetPath)
}
// Upload manifests
remoteManifestFile, err := p.uploadManifests(ui, comm)
if err != nil {
return fmt.Errorf("Error uploading manifests: %s", err)
}
// Compile the facter variables
facterVars := make([]string, 0, len(p.config.Facter))
for k, v := range p.config.Facter {
facterVars = append(facterVars, fmt.Sprintf("FACTER_%s='%s'", k, v))
}
// Execute Puppet
command, err := p.config.tpl.Process(p.config.ExecuteCommand, &ExecuteTemplate{
FacterVars: strings.Join(facterVars, " "),
HasHieraConfigPath: remoteHieraConfigPath != "",
HieraConfigPath: remoteHieraConfigPath,
ManifestFile: remoteManifestFile,
ModulePath: strings.Join(modulePaths, ":"),
Sudo: !p.config.PreventSudo,
})
if err != nil {
return err
}
cmd := &packer.RemoteCmd{
Command: command,
}
ui.Message(fmt.Sprintf("Running Puppet: %s", command))
if err := cmd.StartWithUi(comm, ui); err != nil {
return err
}
if cmd.ExitStatus != 0 && cmd.ExitStatus != 2 {
return fmt.Errorf("Puppet exited with a non-zero exit status: %d", cmd.ExitStatus)
}
return nil
}
func (p *Provisioner) Cancel() {
// Just hard quit. It isn't a big deal if what we're doing keeps
// running on the other side.
os.Exit(0)
}
func (p *Provisioner) uploadHieraConfig(ui packer.Ui, comm packer.Communicator) (string, error) {
ui.Message("Uploading hiera configuration...")
f, err := os.Open(p.config.HieraConfigPath)
if err != nil {
return "", err
}
defer f.Close()
path := fmt.Sprintf("%s/hiera.yaml", p.config.StagingDir)
if err := comm.Upload(path, f); err != nil {
return "", err
}
return path, nil
}
func (p *Provisioner) uploadManifests(ui packer.Ui, comm packer.Communicator) (string, error) {
// Create the remote manifests directory...
ui.Message("Uploading manifests...")
remoteManifestsPath := fmt.Sprintf("%s/manifests", p.config.StagingDir)
if err := p.createDir(ui, comm, remoteManifestsPath); err != nil {
return "", fmt.Errorf("Error creating manifests directory: %s", err)
}
// Upload the main manifest
f, err := os.Open(p.config.ManifestFile)
if err != nil {
return "", err
}
defer f.Close()
manifestFilename := filepath.Base(p.config.ManifestFile)
remoteManifestFile := fmt.Sprintf("%s/%s", remoteManifestsPath, manifestFilename)
if err := comm.Upload(remoteManifestFile, f); err != nil {
return "", err
}
return remoteManifestFile, nil
}
func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir string) error {
cmd := &packer.RemoteCmd{
Command: fmt.Sprintf("mkdir -p '%s'", dir),
}
if err := cmd.StartWithUi(comm, ui); err != nil {
return err
}
if cmd.ExitStatus != 0 {
return fmt.Errorf("Non-zero exit status.")
}
return nil
}
func (p *Provisioner) uploadDirectory(ui packer.Ui, comm packer.Communicator, dst string, src string) error {
if err := p.createDir(ui, comm, dst); err != nil {
return err
}
// Make sure there is a trailing "/" so that the directory isn't
// created on the other side.
if src[len(src)-1] != '/' {
src = src + "/"
}
return comm.UploadDir(dst, src, nil)
}

View File

@ -0,0 +1,110 @@
package puppetmasterless
import (
"github.com/mitchellh/packer/packer"
"io/ioutil"
"os"
"testing"
)
func testConfig() map[string]interface{} {
tf, err := ioutil.TempFile("", "packer")
if err != nil {
panic(err)
}
return map[string]interface{}{
"manifest_file": tf.Name(),
}
}
func TestProvisioner_Impl(t *testing.T) {
var raw interface{}
raw = &Provisioner{}
if _, ok := raw.(packer.Provisioner); !ok {
t.Fatalf("must be a Provisioner")
}
}
func TestProvisionerPrepare_hieraConfigPath(t *testing.T) {
config := testConfig()
delete(config, "hiera_config_path")
p := new(Provisioner)
err := p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
// Test with a good one
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("error tempfile: %s", err)
}
defer os.Remove(tf.Name())
config["hiera_config_path"] = tf.Name()
p = new(Provisioner)
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
}
func TestProvisionerPrepare_manifestFile(t *testing.T) {
config := testConfig()
delete(config, "manifest_file")
p := new(Provisioner)
err := p.Prepare(config)
if err == nil {
t.Fatal("should be an error")
}
// Test with a good one
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("error tempfile: %s", err)
}
defer os.Remove(tf.Name())
config["manifest_file"] = tf.Name()
p = new(Provisioner)
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
}
func TestProvisionerPrepare_modulePaths(t *testing.T) {
config := testConfig()
delete(config, "module_paths")
p := new(Provisioner)
err := p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
// Test with bad paths
config["module_paths"] = []string{"i-should-not-exist"}
p = new(Provisioner)
err = p.Prepare(config)
if err == nil {
t.Fatal("should be an error")
}
// Test with a good one
td, err := ioutil.TempDir("", "packer")
if err != nil {
t.Fatalf("error: %s", err)
}
defer os.RemoveAll(td)
config["module_paths"] = []string{td}
p = new(Provisioner)
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
}

View File

@ -8,6 +8,7 @@ import (
"fmt"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/packer"
"io"
"io/ioutil"
"log"
"os"
@ -20,6 +21,10 @@ const DefaultRemotePath = "/tmp/script.sh"
type config struct {
common.PackerConfig `mapstructure:",squash"`
// If true, the script contains binary data and line endings will not
// be converted from Windows to Unix-style.
Binary bool
// An inline script to execute. Multiple strings are all executed
// in the context of a single shell.
Inline []string
@ -259,7 +264,12 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
return err
}
if err := comm.Upload(p.config.RemotePath, f); err != nil {
var r io.Reader = f
if !p.config.Binary {
r = &UnixReader{Reader: r}
}
if err := comm.Upload(p.config.RemotePath, r); err != nil {
return fmt.Errorf("Error uploading script: %s", err)
}

View File

@ -0,0 +1,58 @@
package shell
import (
"bufio"
"io"
"sync"
)
// UnixReader is a Reader implementation that automatically converts
// Windows line endings to Unix line endings.
type UnixReader struct {
Reader io.Reader
buf []byte
once sync.Once
scanner *bufio.Scanner
}
func (r *UnixReader) Read(p []byte) (n int, err error) {
// Create the buffered reader once
r.once.Do(func() {
r.scanner = bufio.NewScanner(r.Reader)
r.scanner.Split(scanUnixLine)
})
// If we have no data in our buffer, scan to the next token
if len(r.buf) == 0 {
if !r.scanner.Scan() {
err = r.scanner.Err()
if err == nil {
err = io.EOF
}
return 0, err
}
r.buf = r.scanner.Bytes()
}
// Write out as much data as we can to the buffer, storing the rest
// for the next read.
n = len(p)
if n > len(r.buf) {
n = len(r.buf)
}
copy(p, r.buf)
r.buf = r.buf[n:]
return
}
// scanUnixLine is a bufio.Scanner SplitFunc. It tokenizes on lines, but
// only returns unix-style lines. So even if the line is "one\r\n", the
// token returned will be "one\n".
func scanUnixLine(data []byte, atEOF bool) (advance int, token []byte, err error) {
advance, token, err = bufio.ScanLines(data, atEOF)
if token == nil {
// No complete line is available yet; don't turn the nil token
// into a spurious newline.
return
}
return advance, append(token, "\n"...), err
}

View File

@ -0,0 +1,51 @@
package shell
import (
"bytes"
"io"
"testing"
)
func TestUnixReader_impl(t *testing.T) {
var raw interface{}
raw = new(UnixReader)
if _, ok := raw.(io.Reader); !ok {
t.Fatal("should be reader")
}
}
func TestUnixReader(t *testing.T) {
input := "one\r\ntwo\n\r\nthree\r\n"
expected := "one\ntwo\n\nthree\n"
r := &UnixReader{
Reader: bytes.NewReader([]byte(input)),
}
result := new(bytes.Buffer)
if _, err := io.Copy(result, r); err != nil {
t.Fatalf("err: %s", err)
}
if result.String() != expected {
t.Fatalf("bad: %#v", result.String())
}
}
func TestUnixReader_unixOnly(t *testing.T) {
input := "\none\n\ntwo\nthree\n\n"
expected := "\none\n\ntwo\nthree\n\n"
r := &UnixReader{
Reader: bytes.NewReader([]byte(input)),
}
result := new(bytes.Buffer)
if _, err := io.Copy(result, r); err != nil {
t.Fatalf("err: %s", err)
}
if result.String() != expected {
t.Fatalf("bad: %#v", result.String())
}
}

View File

@ -20,6 +20,15 @@ cd $DIR
GIT_COMMIT=$(git rev-parse HEAD)
GIT_DIRTY=$(test -n "`git status --porcelain`" && echo "+CHANGES" || true)
# If we're building on Windows, specify an extension
EXTENSION=""
if [ "$(go env GOOS)" = "windows" ]; then
EXTENSION=".exe"
fi
# Make sure that if we're killed, we kill all our subprocesses
trap "kill 0" SIGINT SIGTERM EXIT
# If we're building a race-enabled build, then set that up.
if [ ! -z $PACKER_RACE ]; then
echo -e "${OK_COLOR}--> Building with race detection enabled${NO_COLOR}"
@ -29,21 +38,60 @@ fi
echo -e "${OK_COLOR}--> Installing dependencies to speed up builds...${NO_COLOR}"
go get ./...
# This function waits for all background tasks to complete
waitAll() {
RESULT=0
for job in `jobs -p`; do
wait $job
if [ $? -ne 0 ]; then
RESULT=1
fi
done
if [ $RESULT -ne 0 ]; then
exit $RESULT
fi
}
waitSingle() {
if [ ! -z $PACKER_NO_BUILD_PARALLEL ]; then
waitAll
fi
}
if [ -z $PACKER_NO_BUILD_PARALLEL ]; then
echo -e "${OK_COLOR}--> NOTE: Compilation of components " \
"will be done in parallel.${NO_COLOR}"
fi
# Compile the main Packer app
echo -e "${OK_COLOR}--> Compiling Packer${NO_COLOR}"
(
go build \
${PACKER_RACE} \
-ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
-v \
-o bin/packer .
-o bin/packer${EXTENSION} .
) &
waitSingle
# Go over each plugin and build it
for PLUGIN in $(find ./plugin -mindepth 1 -maxdepth 1 -type d); do
PLUGIN_NAME=$(basename ${PLUGIN})
echo -e "${OK_COLOR}--> Compiling Plugin: ${PLUGIN_NAME}${NO_COLOR}"
(
go build \
${PACKER_RACE} \
-ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
-v \
-o bin/packer-${PLUGIN_NAME} ${PLUGIN}
-o bin/packer-${PLUGIN_NAME}${EXTENSION} ${PLUGIN}
) &
waitSingle
done
waitAll
# Reset signal trapping to avoid "Terminated: 15" at the end
trap - SIGINT SIGTERM EXIT

View File

@ -2,11 +2,11 @@ source 'https://rubygems.org'
ruby '1.9.3'
gem "middleman", "~> 3.0.6"
gem "middleman-minify-html", "~> 3.0.0"
gem "middleman", "~> 3.1.5"
gem "middleman-minify-html", "~> 3.1.1"
gem "rack-contrib", "~> 1.1.0"
gem "redcarpet", "~> 2.2.2"
gem "therubyracer", "~> 0.10.2"
gem "redcarpet", "~> 3.0.0"
gem "therubyracer", "~> 0.12.0"
gem "thin", "~> 1.5.0"
group :development do

View File

@ -1,134 +1,109 @@
GEM
remote: https://rubygems.org/
specs:
POpen4 (0.1.4)
Platform (>= 0.4.0)
open4
Platform (0.4.0)
activesupport (3.2.9)
i18n (~> 0.6)
activesupport (3.2.14)
i18n (~> 0.6, >= 0.6.4)
multi_json (~> 1.0)
chunky_png (1.2.6)
chunky_png (1.2.8)
coffee-script (2.2.0)
coffee-script-source
execjs
coffee-script-source (1.3.3)
coffee-script-source (1.6.3)
compass (0.12.2)
chunky_png (~> 1.2)
fssm (>= 0.2.7)
sass (~> 3.1)
daemons (1.1.9)
eventmachine (1.0.0)
eventmachine (1.0.3)
execjs (1.4.0)
multi_json (~> 1.0)
ffi (1.2.0)
fssm (0.2.9)
haml (3.1.7)
highline (1.6.15)
hike (1.2.1)
htmlcompressor (0.0.3)
yui-compressor (~> 0.9.6)
http_router (0.10.2)
rack (>= 1.0.0)
url_mount (~> 0.2.1)
i18n (0.6.1)
libv8 (3.3.10.4)
listen (0.5.3)
maruku (0.6.1)
syntax (>= 1.0.0)
middleman (3.0.6)
middleman-core (= 3.0.6)
middleman-more (= 3.0.6)
middleman-sprockets (~> 3.0.2)
middleman-core (3.0.6)
activesupport (~> 3.2.6)
bundler (~> 1.1)
listen (~> 0.5.2)
rack (~> 1.4.1)
rack-test (~> 0.6.1)
rb-fsevent (~> 0.9.1)
rb-inotify (~> 0.8.8)
thor (~> 0.15.4)
tilt (~> 1.3.1)
middleman-minify-html (3.0.0)
htmlcompressor
middleman-core (~> 3.0.0)
middleman-more (3.0.6)
ffi (1.9.0)
fssm (0.2.10)
haml (4.0.3)
tilt
highline (1.6.19)
hike (1.2.3)
i18n (0.6.5)
kramdown (1.1.0)
libv8 (3.16.14.3)
listen (1.2.3)
rb-fsevent (>= 0.9.3)
rb-inotify (>= 0.9)
rb-kqueue (>= 0.2)
middleman (3.1.5)
coffee-script (~> 2.2.0)
coffee-script-source (~> 1.3.3)
compass (>= 0.12.2)
execjs (~> 1.4.0)
haml (>= 3.1.6)
i18n (~> 0.6.0)
maruku (~> 0.6.0)
middleman-core (= 3.0.6)
padrino-helpers (= 0.10.7)
kramdown (~> 1.1.0)
middleman-core (= 3.1.5)
middleman-more (= 3.1.5)
middleman-sprockets (>= 3.1.2)
sass (>= 3.1.20)
uglifier (~> 1.2.6)
middleman-sprockets (3.0.4)
middleman-more (~> 3.0.1)
sprockets (~> 2.1, < 2.5)
sprockets-sass (~> 0.8.0)
multi_json (1.4.0)
open4 (1.3.0)
padrino-core (0.10.7)
activesupport (~> 3.2.0)
http_router (~> 0.10.2)
sinatra (~> 1.3.1)
thor (~> 0.15.2)
tilt (~> 1.3.0)
padrino-helpers (0.10.7)
i18n (~> 0.6)
padrino-core (= 0.10.7)
rack (1.4.1)
uglifier (~> 2.1.0)
middleman-core (3.1.5)
activesupport (~> 3.2.6)
bundler (~> 1.1)
i18n (~> 0.6.1)
listen (~> 1.2.2)
rack (>= 1.4.5)
rack-test (~> 0.6.1)
thor (>= 0.15.2, < 2.0)
tilt (~> 1.3.6)
middleman-minify-html (3.1.1)
middleman-core (~> 3.0)
middleman-more (3.1.5)
middleman-sprockets (3.1.4)
middleman-core (>= 3.0.14)
middleman-more (>= 3.0.14)
sprockets (~> 2.1)
sprockets-helpers (~> 1.0.0)
sprockets-sass (~> 1.0.0)
multi_json (1.8.0)
rack (1.5.2)
rack-contrib (1.1.0)
rack (>= 0.9.1)
rack-protection (1.2.0)
rack
rack-test (0.6.2)
rack (>= 1.0)
rb-fsevent (0.9.2)
rb-inotify (0.8.8)
rb-fsevent (0.9.3)
rb-inotify (0.9.2)
ffi (>= 0.5.0)
redcarpet (2.2.2)
sass (3.2.3)
sinatra (1.3.3)
rack (~> 1.3, >= 1.3.6)
rack-protection (~> 1.2)
tilt (~> 1.3, >= 1.3.3)
sprockets (2.4.5)
rb-kqueue (0.2.0)
ffi (>= 0.5.0)
redcarpet (3.0.0)
ref (1.0.5)
sass (3.2.10)
sprockets (2.10.0)
hike (~> 1.2)
multi_json (~> 1.0)
rack (~> 1.0)
tilt (~> 1.1, != 1.3.0)
sprockets-sass (0.8.0)
sprockets-helpers (1.0.1)
sprockets (~> 2.0)
sprockets-sass (1.0.1)
sprockets (~> 2.0)
tilt (~> 1.1)
syntax (1.0.0)
therubyracer (0.10.2)
libv8 (~> 3.3.10)
thin (1.5.0)
therubyracer (0.12.0)
libv8 (~> 3.16.14.0)
ref
thin (1.5.1)
daemons (>= 1.0.9)
eventmachine (>= 0.12.6)
rack (>= 1.0.0)
thor (0.15.4)
tilt (1.3.3)
uglifier (1.2.7)
thor (0.18.1)
tilt (1.3.7)
uglifier (2.1.2)
execjs (>= 0.3.0)
multi_json (~> 1.3)
url_mount (0.2.1)
rack
yui-compressor (0.9.6)
POpen4 (>= 0.1.4)
multi_json (~> 1.0, >= 1.0.2)
PLATFORMS
ruby
DEPENDENCIES
highline (~> 1.6.15)
middleman (~> 3.0.6)
middleman-minify-html (~> 3.0.0)
middleman (~> 3.1.5)
middleman-minify-html (~> 3.1.1)
rack-contrib (~> 1.1.0)
redcarpet (~> 2.2.2)
therubyracer (~> 0.10.2)
redcarpet (~> 3.0.0)
therubyracer (~> 0.12.0)
thin (~> 1.5.0)


@ -111,9 +111,11 @@ Optional:
of the source AMI will be attached. This defaults to "" (empty string),
which forces Packer to find an open device automatically.
* `mount_command` (string) - The command to use to mount devices. This
defaults to "mount". This may be useful to set if you want to set
environment variables or run the command with `sudo`, for example.
* `command_wrapper` (string) - How to run shell commands. This
defaults to "{{.Command}}". This may be useful to set if you want to set
environment variables or run commands with `sudo`, for example. This is a
configuration template where the `.Command` variable is replaced with the
command to be run.
* `mount_path` (string) - The path where the volume will be mounted. This is
where the chroot environment will be. This defaults to
@ -123,9 +125,6 @@ Optional:
* `tags` (object of key/value strings) - Tags applied to the AMI.
* `unmount_command` (string) - Just like `mount_command`, except this is
the command to unmount devices.
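As a sketch, a builder section that wraps every shell command in `sudo` via `command_wrapper` might look like the following (the `source_ami` and `ami_name` values are illustrative placeholders, not defaults):

<pre class="prettyprint">
{
  "type": "amazon-chroot",
  "source_ami": "ami-e81d5881",
  "ami_name": "packer-chroot-example {{timestamp}}",
  "command_wrapper": "sudo {{.Command}}"
}
</pre>

This lets Packer itself run unprivileged while the mount and copy commands still execute as root.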
## Basic Example
Here is a basic example. It is completely valid except for the access keys:
@ -184,3 +183,37 @@ out of your AMI builds.
Packer properly obtains a process lock for the parallelism-sensitive parts
of its internals such as finding an available device.
## Using an IAM Instance Profile
If AWS keys are not specified in the template or through environment variables,
Packer will use the credentials provided by the instance's IAM profile, if it has one.
The following policy document provides the minimal set of permissions necessary for Packer to work:
<pre class="prettyprint">
{
"Statement": [{
"Effect": "Allow",
"Action" : [
"ec2:AttachVolume",
"ec2:CreateVolume",
"ec2:DeleteVolume",
"ec2:DescribeVolumes",
"ec2:DetachVolume",
"ec2:DescribeInstances",
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot",
"ec2:DescribeSnapshots",
"ec2:DescribeImages",
"ec2:RegisterImage",
"ec2:CreateTags"
],
"Resource" : "*"
}]
}
</pre>


@ -62,7 +62,7 @@ Optional:
device mappings to the AMI. The block device mappings allow for keys:
"device\_name" (string), "virtual\_name" (string), "snapshot\_id" (string),
"volume\_type" (string), "volume\_size" (int), "delete\_on\_termination"
(bool), and "iops" (int).
(bool), "no\_device" (bool), and "iops" (int).
* `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty.
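For instance, assuming the option is named `block_device_mappings`, a hypothetical mapping that adds one provisioned-IOPS volume and suppresses another device might look like this (device names and sizes are illustrative):

<pre class="prettyprint">
"block_device_mappings": [
  {
    "device_name": "/dev/sdb",
    "volume_type": "io1",
    "volume_size": 20,
    "iops": 200,
    "delete_on_termination": true
  },
  {
    "device_name": "/dev/sdc",
    "no_device": true
  }
]
</pre>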


@ -77,8 +77,8 @@ Optional:
device mappings to the AMI. The block device mappings allow for keys:
"device\_name" (string), "virtual\_name" (string), "snapshot\_id" (string),
"volume\_type" (string), "volume\_size" (int), "delete\_on\_termination"
(bool), and "iops" (int). See [amazon-ebs](/docs/builders/amazon-ebs.html)
for an example template.
(bool), "no\_device" (bool), and "iops" (int).
See [amazon-ebs](/docs/builders/amazon-ebs.html) for an example template.
* `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty.


@ -35,11 +35,6 @@ Required:
Optional:
* `event_delay` (string) - The delay, as a duration string, before checking
the status of an event. DigitalOcean's current API has consistency issues
where events take time to appear after being created. This defaults to "5s"
and generally shouldn't have to be changed.
* `image_id` (int) - The ID of the base image to use. This is the image that
will be used to launch a new droplet and provision it. Defaults to "284203",
which happens to be "Ubuntu 12.04 x64 Server."


@ -85,6 +85,10 @@ Optional:
* `format` (string) - Either "ovf" or "ova", this specifies the output
format of the exported virtual machine. This defaults to "ovf".
* `guest_additions_attach` (bool) - If this is true (defaults to "false"),
the guest additions ISO will be attached to the virtual machine as a CD
rather than uploaded as a raw ISO.
* `guest_additions_path` (string) - The path on the guest virtual machine
where the VirtualBox guest additions ISO will be uploaded. By default this
is "VBoxGuestAdditions.iso" which should upload into the login directory
@ -108,6 +112,10 @@ Optional:
how to optimize the virtual hardware to work best with that operating
system.
* `hard_drive_interface` (string) - The type of controller that the primary
hard drive is attached to, defaults to "ide". When set to "sata", the
drive is attached to an AHCI SATA controller.
* `headless` (bool) - Packer defaults to building VirtualBox
virtual machines by launching a GUI that shows the console of the
machine being built. When this value is set to true, the machine will
@ -187,7 +195,7 @@ Optional:
By default this is ".vbox_version", which will generally upload it into
the home directory.
* `vm_name` (string) - This is the name of the VMX file for the new virtual
* `vm_name` (string) - This is the name of the OVF file for the new virtual
machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build.
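A minimal sketch combining the two options described above; this is a fragment only, with the required ISO and SSH settings omitted:

<pre class="prettyprint">
{
  "type": "virtualbox",
  "guest_additions_attach": true,
  "hard_drive_interface": "sata"
}
</pre>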


@ -27,7 +27,7 @@ Ubuntu to self-install. Still, the example serves to show the basic configuration
<pre class="prettyprint">
{
"type": "vmware",
"iso_url": "http://releases.ubuntu.com/12.04/ubuntu-12.04.2-server-amd64.iso",
"iso_url": "http://old-releases.ubuntu.com/releases/precise/ubuntu-12.04.2-server-amd64.iso",
"iso_checksum": "af5f788aee1b32c4b2634734309cc9e9",
"iso_checksum_type": "md5",
"ssh_username": "packer",


@ -161,9 +161,9 @@ on certain cache keys, and is given exclusive access to that key for the
duration of the lock. This locking mechanism allows multiple builders to
share cache data even though they're running in parallel.
For example, both the VMware and VirtualBox support downloading an operating
system ISO from the internet. Most of the time, this ISO is identical. The
locking mechanisms of the cache allow one of the builders to download it
For example, both the VMware and VirtualBox builders support downloading an
operating system ISO from the internet. Most of the time, this ISO is identical.
The locking mechanisms of the cache allow one of the builders to download it
only once, but allow both builders to share the downloaded file.
The [documentation for packer.Cache](#) is


@ -42,6 +42,8 @@ usage: packer [--version] [--help] <command> [<args>]
Available commands are:
build build image(s) from template
fix fixes templates from old versions of packer
inspect see components of a template
validate check that a template is valid
```


@ -32,11 +32,26 @@ The example below is fully functional and expects cookbooks in the
The reference of available configuration options is listed below. No
configuration is actually required, but at least `run_list` is recommended.
* `config_template` (string) - Path to a template that will be used for
the Chef configuration file. By default, Packer sets only the configuration
it needs to match the settings given in the provisioner configuration. If
you need to set configuration values that the Packer provisioner doesn't
support, then you should use a custom configuration template. See the
dedicated "Chef Configuration" section below for more details.
* `cookbook_paths` (array of strings) - This is an array of paths to
"cookbooks" directories on your local filesystem. These will be uploaded
to the remote machine in the directory specified by the `staging_directory`.
By default, this is empty.
* `roles_path` (string) - The path to the "roles" directory on your local filesystem.
These will be uploaded to the remote machine in the directory specified by the
`staging_directory`. By default, this is empty.
* `data_bags_path` (string) - The path to the "data_bags" directory on your local filesystem.
These will be uploaded to the remote machine in the directory specified by the
`staging_directory`. By default, this is empty.
* `execute_command` (string) - The command used to execute Chef. This has
various [configuration template variables](/docs/templates/configuration-templates.html)
available. See below for more information.
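Putting several of these options together, a hypothetical provisioner block might look like the following (the paths and run list are illustrative):

```
{
  "type": "chef-solo",
  "cookbook_paths": ["cookbooks", "site-cookbooks"],
  "roles_path": "roles",
  "data_bags_path": "data_bags",
  "run_list": ["recipe[apache2]"]
}
```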
@ -70,6 +85,25 @@ configuration is actually required, but at least `run_list` is recommended.
this folder. If the permissions are not correct, use a shell provisioner
prior to this to configure it properly.
## Chef Configuration
By default, Packer uses a simple Chef configuration file in order to set
the options specified for the provisioner. But Chef is a complex tool that
supports many configuration options. Packer allows you to specify a custom
configuration template if you'd like to set custom configurations.
The default value for the configuration template is:
```
cookbook_path [{{.CookbookPaths}}]
```
This template is a [configuration template](/docs/templates/configuration-templates.html)
and has a set of variables available to use:
* `CookbookPaths` is the set of cookbook paths ready to be embedded directly
into a Ruby array to configure Chef.
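As an illustration, a custom configuration template could keep the default cookbook path line and add an ordinary Chef Solo setting alongside it (the extra line is plain Chef configuration, not a Packer variable):

```
cookbook_path [{{.CookbookPaths}}]
log_level :info
```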
## Execute Command
By default, Packer uses the following command (broken across multiple lines

Some files were not shown because too many files have changed in this diff.