Merge branch 'master' into 430

commit 36326ee8c2
Matthew Hooker, 2013-09-26 01:04:42 -07:00
43 changed files with 1293 additions and 208 deletions


@@ -1,22 +1,49 @@
-## 0.3.8 (unreleased)
+## 0.3.9 (unreleased)
 
-FEATURES:
-
-* provisioner/chef-solo: Ability to specify a custom Chef configuration
-  template.
-
-IMPROVEMENTS:
-
-* builder/amazon/*: Interrupts work while waiting for AMI to be ready.
-
 BUG FIXES:
 
-* builder/amazon/*: While waiting for AMI, will detect "failed" state.
-* builder/amazon/*: Waiting for state will detect if the resource (AMI,
+* core: default user variable values don't need to be strings. [GH-456]
+
+## 0.3.8 (September 22, 2013)
+
+FEATURES:
+
+* core: You can now specify `only` and `except` configurations on any
+  provisioner or post-processor to specify a list of builds that they
+  are valid for. [GH-438]
+* builders/virtualbox: Guest additions can be attached rather than uploaded,
+  easier to handle for Windows guests. [GH-405]
+* provisioner/chef-solo: Ability to specify a custom Chef configuration
+  template.
+* provisioner/chef-solo: Roles and data bags support. [GH-348]
+
+IMPROVEMENTS:
+
+* core: User variables can now be used for integer, boolean, etc.
+  values. [GH-418]
+* core: Plugins made with incompatible versions will no longer load.
+* builder/amazon/all: Interrupts work while waiting for AMI to be ready.
+* provisioner/shell: Script line-endings are automatically converted to
+  Unix-style line-endings. Can be disabled by setting "binary" to "true".
+  [GH-277]
+
+BUG FIXES:
+
+* core: Set TCP KeepAlives on internally created RPC connections so that
+  they don't die. [GH-416]
+* builder/amazon/all: While waiting for AMI, will detect "failed" state.
+* builder/amazon/all: Waiting for state will detect if the resource (AMI,
   instance, etc.) disappears from under it.
+* builder/amazon/instance: Exclude only contents of /tmp, not /tmp
+  itself. [GH-437]
+* builder/amazon/instance: Make AccessKey/SecretKey available to bundle
+  command even when they come from the environment. [GH-434]
 * builder/virtualbox: F1-F12 and delete scancodes now work. [GH-425]
+* post-processor/vagrant: Override configurations properly work. [GH-426]
 * provisioner/puppet-masterless: Fix failure case when both facter vars
   are used and prevent_sudo. [GH-415]
+* provisioner/puppet-masterless: User variables now work properly in
+  manifest file and hiera path. [GH-448]
 
 ## 0.3.7 (September 9, 2013)


@@ -2,16 +2,22 @@ NO_COLOR=\033[0m
 OK_COLOR=\033[32;01m
 ERROR_COLOR=\033[31;01m
 WARN_COLOR=\033[33;01m
 
+DEPS = $(go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)
+
 all: deps
 	@mkdir -p bin/
 	@echo "$(OK_COLOR)==> Building$(NO_COLOR)"
-	@./scripts/build.sh
+	@bash --norc -i ./scripts/build.sh
 
 deps:
 	@echo "$(OK_COLOR)==> Installing dependencies$(NO_COLOR)"
 	@go get -d -v ./...
-	@go list -f '{{range .TestImports}}{{.}} {{end}}' ./... | xargs -n1 go get -d
+	@echo $(DEPS) | xargs -n1 go get -d
+
+updatedeps:
+	@echo "$(OK_COLOR)==> Updating all dependencies$(NO_COLOR)"
+	@go get -d -v -u ./...
+	@echo $(DEPS) | xargs -n1 go get -d -u
 
 clean:
 	@rm -rf bin/ local/ pkg/ src/ website/.sass-cache website/build
@@ -23,4 +29,4 @@ test: deps
 	@echo "$(OK_COLOR)==> Testing Packer...$(NO_COLOR)"
 	go test ./...
 
-.PHONY: all deps format test
+.PHONY: all clean deps format test updatedeps


@@ -18,7 +18,14 @@ type AccessConfig struct {
 // Auth returns a valid aws.Auth object for access to AWS services, or
 // an error if the authentication couldn't be resolved.
 func (c *AccessConfig) Auth() (aws.Auth, error) {
-	return aws.GetAuth(c.AccessKey, c.SecretKey)
+	auth, err := aws.GetAuth(c.AccessKey, c.SecretKey)
+	if err == nil {
+		// Store the accesskey and secret that we got...
+		c.AccessKey = auth.AccessKey
+		c.SecretKey = auth.SecretKey
+	}
+
+	return auth, err
 }
 
 // Region returns the aws.Region object for access to AWS services, requesting


@@ -83,7 +83,7 @@ func (b *Builder) Prepare(raws ...interface{}) error {
 		"-u {{.AccountId}} " +
 		"-c {{.CertPath}} " +
 		"-r {{.Architecture}} " +
-		"-e {{.PrivatePath}} " +
+		"-e {{.PrivatePath}}/* " +
 		"-d {{.Destination}} " +
 		"-p {{.Prefix}} " +
 		"--batch"


@@ -38,7 +38,8 @@ func (s *stepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction {
 	state.Put("privateKey", string(pem.EncodeToMemory(&priv_blk)))
 
 	// Marshal the public key into SSH compatible format
-	pub := ssh.NewRSAPublicKey(&priv.PublicKey)
+	// TODO properly handle the public key error
+	pub, _ := ssh.NewPublicKey(&priv.PublicKey)
 	pub_sshformat := string(ssh.MarshalAuthorizedKey(pub))
 
 	// The name of the public key on DO


@@ -28,6 +28,7 @@ type config struct {
 	DiskSize             uint     `mapstructure:"disk_size"`
 	FloppyFiles          []string `mapstructure:"floppy_files"`
 	Format               string   `mapstructure:"format"`
+	GuestAdditionsAttach bool     `mapstructure:"guest_additions_attach"`
 	GuestAdditionsPath   string   `mapstructure:"guest_additions_path"`
 	GuestAdditionsURL    string   `mapstructure:"guest_additions_url"`
 	GuestAdditionsSHA256 string   `mapstructure:"guest_additions_sha256"`
@@ -361,6 +362,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		new(stepCreateVM),
 		new(stepCreateDisk),
 		new(stepAttachISO),
+		new(stepAttachGuestAdditions),
 		new(stepAttachFloppy),
 		new(stepForwardSSH),
 		new(stepVBoxManage),


@@ -0,0 +1,81 @@
+package virtualbox
+
+import (
+	"fmt"
+	"github.com/mitchellh/multistep"
+	"github.com/mitchellh/packer/packer"
+	"log"
+)
+
+// This step attaches the VirtualBox guest additions as an inserted CD onto
+// the virtual machine.
+//
+// Uses:
+//   config *config
+//   driver Driver
+//   guest_additions_path string
+//   ui packer.Ui
+//   vmName string
+//
+// Produces:
+type stepAttachGuestAdditions struct {
+	attachedPath string
+}
+
+func (s *stepAttachGuestAdditions) Run(state multistep.StateBag) multistep.StepAction {
+	config := state.Get("config").(*config)
+	driver := state.Get("driver").(Driver)
+	guestAdditionsPath := state.Get("guest_additions_path").(string)
+	ui := state.Get("ui").(packer.Ui)
+	vmName := state.Get("vmName").(string)
+
+	// If we're not attaching the guest additions then just return
+	if !config.GuestAdditionsAttach {
+		log.Println("Not attaching guest additions since we're uploading.")
+		return multistep.ActionContinue
+	}
+
+	// Attach the guest additions to the computer
+	log.Println("Attaching guest additions ISO onto IDE controller...")
+	command := []string{
+		"storageattach", vmName,
+		"--storagectl", "IDE Controller",
+		"--port", "1",
+		"--device", "0",
+		"--type", "dvddrive",
+		"--medium", guestAdditionsPath,
+	}
+	if err := driver.VBoxManage(command...); err != nil {
+		err := fmt.Errorf("Error attaching guest additions: %s", err)
+		state.Put("error", err)
+		ui.Error(err.Error())
+		return multistep.ActionHalt
+	}
+
+	// Track the path so that we can unregister it from VirtualBox later
+	s.attachedPath = guestAdditionsPath
+
+	return multistep.ActionContinue
+}
+
+func (s *stepAttachGuestAdditions) Cleanup(state multistep.StateBag) {
+	if s.attachedPath == "" {
+		return
+	}
+
+	driver := state.Get("driver").(Driver)
+	ui := state.Get("ui").(packer.Ui)
+	vmName := state.Get("vmName").(string)
+
+	command := []string{
+		"storageattach", vmName,
+		"--storagectl", "IDE Controller",
+		"--port", "1",
+		"--device", "0",
+		"--medium", "none",
+	}
+	if err := driver.VBoxManage(command...); err != nil {
+		ui.Error(fmt.Sprintf("Error unregistering guest additions: %s", err))
+	}
+}
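The Run/Cleanup pair above is symmetric: the same controller, port, and device are used to attach the ISO and then to eject it with `--medium none`. This standalone sketch checks that symmetry with a hypothetical `fakeDriver` that records the VBoxManage invocations instead of running them (none of these names are from the patch).

```go
package main

import "fmt"

// fakeDriver stands in for the builder's Driver interface; it records
// the VBoxManage invocations the steps above would issue.
type fakeDriver struct{ calls [][]string }

func (d *fakeDriver) VBoxManage(args ...string) error {
	d.calls = append(d.calls, args)
	return nil
}

// attach and detach mirror the Run and Cleanup bodies of
// stepAttachGuestAdditions: same controller, port, and device, with
// "--medium none" ejecting the ISO on cleanup.
func attach(d *fakeDriver, vmName, isoPath string) {
	d.VBoxManage("storageattach", vmName,
		"--storagectl", "IDE Controller",
		"--port", "1", "--device", "0",
		"--type", "dvddrive", "--medium", isoPath)
}

func detach(d *fakeDriver, vmName string) {
	d.VBoxManage("storageattach", vmName,
		"--storagectl", "IDE Controller",
		"--port", "1", "--device", "0",
		"--medium", "none")
}

func main() {
	d := &fakeDriver{}
	attach(d, "packer-vm", "/tmp/VBoxGuestAdditions.iso")
	detach(d, "packer-vm")
	fmt.Println(len(d.calls)) // two recorded VBoxManage invocations
}
```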


@@ -4,6 +4,7 @@ import (
 	"fmt"
 	"github.com/mitchellh/multistep"
 	"github.com/mitchellh/packer/packer"
+	"log"
 	"os"
 )
 
@@ -21,6 +22,12 @@ func (s *stepUploadGuestAdditions) Run(state multistep.StateBag) multistep.StepA
 	guestAdditionsPath := state.Get("guest_additions_path").(string)
 	ui := state.Get("ui").(packer.Ui)
 
+	// If we're attaching then don't do this, since we attached.
+	if config.GuestAdditionsAttach {
+		log.Println("Not uploading guest additions since we're attaching.")
+		return multistep.ActionContinue
+	}
+
 	version, err := driver.Version()
 	if err != nil {
 		state.Put("error", fmt.Errorf("Error reading version for guest additions upload: %s", err))


@@ -41,8 +41,9 @@ func CheckUnusedConfig(md *mapstructure.Metadata) *packer.MultiError {
 func DecodeConfig(target interface{}, raws ...interface{}) (*mapstructure.Metadata, error) {
 	var md mapstructure.Metadata
 	decoderConfig := &mapstructure.DecoderConfig{
-		Metadata: &md,
-		Result:   target,
+		Metadata:         &md,
+		Result:           target,
+		WeaklyTypedInput: true,
 	}
 
 	decoder, err := mapstructure.NewDecoder(decoderConfig)


@@ -60,9 +60,9 @@ func (k *SimpleKeychain) Key(i int) (ssh.PublicKey, error) {
 	}
 
 	switch key := k.keys[i].(type) {
 	case *rsa.PrivateKey:
-		return ssh.NewRSAPublicKey(&key.PublicKey), nil
+		return ssh.NewPublicKey(&key.PublicKey)
 	case *dsa.PrivateKey:
-		return ssh.NewDSAPublicKey(&key.PublicKey), nil
+		return ssh.NewPublicKey(&key.PublicKey)
 	}
 
 	panic("unknown key type")
 }


@@ -317,10 +317,24 @@ func (c *Client) Start() (address string, err error) {
 		err = errors.New("timeout while waiting for plugin to start")
 	case <-exitCh:
 		err = errors.New("plugin exited before we could connect")
-	case line := <-linesCh:
-		// Trim the address and reset the err since we were able
-		// to read some sort of address.
-		c.address = strings.TrimSpace(string(line))
+	case lineBytes := <-linesCh:
+		// Trim the line and split by "|" in order to get the parts of
+		// the output.
+		line := strings.TrimSpace(string(lineBytes))
+		parts := strings.SplitN(line, "|", 2)
+		if len(parts) < 2 {
+			err = fmt.Errorf("Unrecognized remote plugin message: %s", line)
+			return
+		}
+
+		// Test the API version
+		if parts[0] != APIVersion {
+			err = fmt.Errorf("Incompatible API version with plugin. "+
+				"Plugin version: %s, Ours: %s", parts[0], APIVersion)
+			return
+		}
+
+		c.address = parts[1]
 		address = c.address
 	}
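The handshake changed here from a bare RPC address on stdout to a single `<api-version>|<address>` line, which is what lets the client refuse plugins built against an incompatible Packer. This standalone sketch (the `parseHandshake` helper is hypothetical, not part of the patch) reproduces the validation:

```go
package main

import (
	"fmt"
	"strings"
)

const apiVersion = "1" // must match the plugin's APIVersion constant

// parseHandshake splits the single line a plugin prints on stdout,
// "<api-version>|<rpc-address>", and rejects unknown versions — the same
// checks the patched Client.Start performs.
func parseHandshake(line string) (string, error) {
	parts := strings.SplitN(strings.TrimSpace(line), "|", 2)
	if len(parts) < 2 {
		return "", fmt.Errorf("unrecognized remote plugin message: %s", line)
	}
	if parts[0] != apiVersion {
		return "", fmt.Errorf("incompatible API version with plugin. "+
			"Plugin version: %s, ours: %s", parts[0], apiVersion)
	}
	return parts[1], nil
}

func main() {
	addr, err := parseHandshake("1|127.0.0.1:10000\n")
	fmt.Println(addr, err) // 127.0.0.1:10000 <nil>

	_, err = parseHandshake(":1234") // old-style plugin: no version prefix
	fmt.Println(err != nil)          // true
}
```

An old plugin printing only `:1234` fails the split, and a mismatched version fails the comparison; either way the client errors out instead of talking to it.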


@@ -37,6 +37,21 @@ func TestClient(t *testing.T) {
 	}
 }
 
+func TestClientStart_badVersion(t *testing.T) {
+	config := &ClientConfig{
+		Cmd:          helperProcess("bad-version"),
+		StartTimeout: 50 * time.Millisecond,
+	}
+
+	c := NewClient(config)
+	defer c.Kill()
+
+	_, err := c.Start()
+	if err == nil {
+		t.Fatal("err should not be nil")
+	}
+}
+
 func TestClient_Start_Timeout(t *testing.T) {
 	config := &ClientConfig{
 		Cmd:          helperProcess("start-timeout"),


@@ -30,9 +30,16 @@ var Interrupts int32 = 0
 const MagicCookieKey = "PACKER_PLUGIN_MAGIC_COOKIE"
 const MagicCookieValue = "d602bf8f470bc67ca7faa0386276bbdd4330efaf76d1a219cb4d6991ca9872b2"
 
+// The APIVersion is outputted along with the RPC address. The plugin
+// client validates this API version and will show an error if it doesn't
+// know how to speak it.
+const APIVersion = "1"
+
 // This serves a single RPC connection on the given RPC server on
 // a random port.
 func serve(server *rpc.Server) (err error) {
+	log.Printf("Plugin build against Packer '%s'", packer.GitCommit)
+
 	if os.Getenv(MagicCookieKey) != MagicCookieValue {
 		return errors.New("Please do not execute plugins directly. Packer will execute these for you.")
 	}
@@ -75,7 +82,7 @@ func serve(server *rpc.Server) (err error) {
 	// Output the address to stdout
 	log.Printf("Plugin address: %s\n", address)
-	fmt.Println(address)
+	fmt.Printf("%s|%s\n", APIVersion, address)
 	os.Stdout.Sync()
 
 	// Accept a connection


@@ -50,6 +50,9 @@ func TestHelperProcess(*testing.T) {
 	cmd, args := args[0], args[1:]
 	switch cmd {
+	case "bad-version":
+		fmt.Printf("%s1|:1234\n", APIVersion)
+		<-make(chan int)
 	case "builder":
 		ServeBuilder(new(helperBuilder))
 	case "command":
@@ -59,7 +62,7 @@ func TestHelperProcess(*testing.T) {
 	case "invalid-rpc-address":
 		fmt.Println("lolinvalid")
 	case "mock":
-		fmt.Println(":1234")
+		fmt.Printf("%s|:1234\n", APIVersion)
 		<-make(chan int)
 	case "post-processor":
 		ServePostProcessor(new(helperPostProcessor))
@@ -69,11 +72,11 @@ func TestHelperProcess(*testing.T) {
 		time.Sleep(1 * time.Minute)
 		os.Exit(1)
 	case "stderr":
-		fmt.Println(":1234")
+		fmt.Printf("%s|:1234\n", APIVersion)
 		log.Println("HELLO")
 		log.Println("WORLD")
 	case "stdin":
-		fmt.Println(":1234")
+		fmt.Printf("%s|:1234\n", APIVersion)
 		data := make([]byte, 5)
 		if _, err := os.Stdin.Read(data); err != nil {
 			log.Printf("stdin read error: %s", err)


@@ -52,7 +52,7 @@ func (b *build) Run(ui packer.Ui, cache packer.Cache) ([]packer.Artifact, error)
 	artifacts := make([]packer.Artifact, len(result))
 	for i, addr := range result {
-		client, err := rpc.Dial("tcp", addr)
+		client, err := rpcDial(addr)
 		if err != nil {
 			return nil, err
 		}
@@ -92,7 +92,7 @@ func (b *BuildServer) Prepare(v map[string]string, reply *error) error {
 }
 
 func (b *BuildServer) Run(args *BuildRunArgs, reply *[]string) error {
-	client, err := rpc.Dial("tcp", args.UiRPCAddress)
+	client, err := rpcDial(args.UiRPCAddress)
 	if err != nil {
 		return err
 	}


@@ -5,7 +5,6 @@ import (
 	"fmt"
 	"github.com/mitchellh/packer/packer"
 	"log"
-	"net"
 	"net/rpc"
 )
 
@@ -95,7 +94,7 @@ func (b *builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		return nil, nil
 	}
 
-	client, err := rpc.Dial("tcp", response.RPCAddress)
+	client, err := rpcDial(response.RPCAddress)
 	if err != nil {
 		return nil, err
 	}
@@ -119,12 +118,12 @@ func (b *BuilderServer) Prepare(args *BuilderPrepareArgs, reply *error) error {
 }
 
 func (b *BuilderServer) Run(args *BuilderRunArgs, reply *interface{}) error {
-	client, err := rpc.Dial("tcp", args.RPCAddress)
+	client, err := rpcDial(args.RPCAddress)
 	if err != nil {
 		return err
 	}
 
-	responseC, err := net.Dial("tcp", args.ResponseAddress)
+	responseC, err := tcpDial(args.ResponseAddress)
 	if err != nil {
 		return err
 	}


@@ -66,7 +66,7 @@ func (c *CommandServer) Help(args *interface{}, reply *string) error {
 }
 
 func (c *CommandServer) Run(args *CommandRunArgs, reply *int) error {
-	client, err := rpc.Dial("tcp", args.RPCAddress)
+	client, err := rpcDial(args.RPCAddress)
 	if err != nil {
 		return err
 	}


@@ -177,7 +177,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
 	toClose := make([]net.Conn, 0)
 	if args.StdinAddress != "" {
-		stdinC, err := net.Dial("tcp", args.StdinAddress)
+		stdinC, err := tcpDial(args.StdinAddress)
 		if err != nil {
 			return err
 		}
@@ -187,7 +187,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
 	}
 
 	if args.StdoutAddress != "" {
-		stdoutC, err := net.Dial("tcp", args.StdoutAddress)
+		stdoutC, err := tcpDial(args.StdoutAddress)
 		if err != nil {
 			return err
 		}
@@ -197,7 +197,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
 	}
 
 	if args.StderrAddress != "" {
-		stderrC, err := net.Dial("tcp", args.StderrAddress)
+		stderrC, err := tcpDial(args.StderrAddress)
 		if err != nil {
 			return err
 		}
@@ -208,7 +208,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
 	// Connect to the response address so we can write our result to it
 	// when ready.
-	responseC, err := net.Dial("tcp", args.ResponseAddress)
+	responseC, err := tcpDial(args.ResponseAddress)
 	if err != nil {
 		return err
 	}
@@ -234,7 +234,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
 }
 
 func (c *CommunicatorServer) Upload(args *CommunicatorUploadArgs, reply *interface{}) (err error) {
-	readerC, err := net.Dial("tcp", args.ReaderAddress)
+	readerC, err := tcpDial(args.ReaderAddress)
 	if err != nil {
 		return
 	}
@@ -250,7 +250,7 @@ func (c *CommunicatorServer) UploadDir(args *CommunicatorUploadDirArgs, reply *e
 }
 
 func (c *CommunicatorServer) Download(args *CommunicatorDownloadArgs, reply *interface{}) (err error) {
-	writerC, err := net.Dial("tcp", args.WriterAddress)
+	writerC, err := tcpDial(args.WriterAddress)
 	if err != nil {
 		return
 	}

packer/rpc/dial.go (new file)

@@ -0,0 +1,33 @@
+package rpc
+
+import (
+	"net"
+	"net/rpc"
+)
+
+// rpcDial makes a TCP connection to a remote RPC server and returns
+// the client. This will set the connection up properly so that keep-alives
+// are set and so on and should be used to make all RPC connections within
+// this package.
+func rpcDial(address string) (*rpc.Client, error) {
+	tcpConn, err := tcpDial(address)
+	if err != nil {
+		return nil, err
+	}
+
+	// Create an RPC client around our connection
+	return rpc.NewClient(tcpConn), nil
+}
+
+// tcpDial connects via TCP to the designated address.
+func tcpDial(address string) (*net.TCPConn, error) {
+	conn, err := net.Dial("tcp", address)
+	if err != nil {
+		return nil, err
+	}
+
+	// Set a keep-alive so that the connection stays alive even when idle
+	tcpConn := conn.(*net.TCPConn)
+	tcpConn.SetKeepAlive(true)
+
+	return tcpConn, nil
+}


@@ -28,7 +28,7 @@ func (e *Environment) Builder(name string) (b packer.Builder, err error) {
 		return
 	}
 
-	client, err := rpc.Dial("tcp", reply)
+	client, err := rpcDial(reply)
 	if err != nil {
 		return
 	}
@@ -43,7 +43,7 @@ func (e *Environment) Cache() packer.Cache {
 		panic(err)
 	}
 
-	client, err := rpc.Dial("tcp", reply)
+	client, err := rpcDial(reply)
 	if err != nil {
 		panic(err)
 	}
@@ -64,7 +64,7 @@ func (e *Environment) Hook(name string) (h packer.Hook, err error) {
 		return
 	}
 
-	client, err := rpc.Dial("tcp", reply)
+	client, err := rpcDial(reply)
 	if err != nil {
 		return
 	}
@@ -80,7 +80,7 @@ func (e *Environment) PostProcessor(name string) (p packer.PostProcessor, err er
 		return
 	}
 
-	client, err := rpc.Dial("tcp", reply)
+	client, err := rpcDial(reply)
 	if err != nil {
 		return
 	}
@@ -96,7 +96,7 @@ func (e *Environment) Provisioner(name string) (p packer.Provisioner, err error)
 		return
 	}
 
-	client, err := rpc.Dial("tcp", reply)
+	client, err := rpcDial(reply)
 	if err != nil {
 		return
 	}
@@ -109,7 +109,7 @@ func (e *Environment) Ui() packer.Ui {
 	var reply string
 	e.client.Call("Environment.Ui", new(interface{}), &reply)
 
-	client, err := rpc.Dial("tcp", reply)
+	client, err := rpcDial(reply)
 	if err != nil {
 		panic(err)
 	}


@@ -46,7 +46,7 @@ func (h *hook) Cancel() {
 }
 
 func (h *HookServer) Run(args *HookRunArgs, reply *interface{}) error {
-	client, err := rpc.Dial("tcp", args.RPCAddress)
+	client, err := rpcDial(args.RPCAddress)
 	if err != nil {
 		return err
 	}


@@ -57,7 +57,7 @@ func (p *postProcessor) PostProcess(ui packer.Ui, a packer.Artifact) (packer.Art
 		return nil, false, nil
 	}
 
-	client, err := rpc.Dial("tcp", response.RPCAddress)
+	client, err := rpcDial(response.RPCAddress)
 	if err != nil {
 		return nil, false, err
 	}
@@ -75,7 +75,7 @@ func (p *PostProcessorServer) Configure(args *PostProcessorConfigureArgs, reply
 }
 
 func (p *PostProcessorServer) PostProcess(address string, reply *PostProcessorProcessResponse) error {
-	client, err := rpc.Dial("tcp", address)
+	client, err := rpcDial(address)
 	if err != nil {
 		return err
 	}


@@ -65,7 +65,7 @@ func (p *ProvisionerServer) Prepare(args *ProvisionerPrepareArgs, reply *error)
 }
 
 func (p *ProvisionerServer) Provision(args *ProvisionerProvisionArgs, reply *interface{}) error {
-	client, err := rpc.Dial("tcp", args.RPCAddress)
+	client, err := rpcDial(args.RPCAddress)
 	if err != nil {
 		return err
 	}


@ -48,6 +48,8 @@ type RawBuilderConfig struct {
// configuration. It contains the type of the post processor as well as the // configuration. It contains the type of the post processor as well as the
// raw configuration that is handed to the post-processor for it to process. // raw configuration that is handed to the post-processor for it to process.
type RawPostProcessorConfig struct { type RawPostProcessorConfig struct {
TemplateOnlyExcept `mapstructure:",squash"`
Type string Type string
KeepInputArtifact bool `mapstructure:"keep_input_artifact"` KeepInputArtifact bool `mapstructure:"keep_input_artifact"`
RawConfig map[string]interface{} RawConfig map[string]interface{}
@ -57,6 +59,8 @@ type RawPostProcessorConfig struct {
// It contains the type of the provisioner as well as the raw configuration // It contains the type of the provisioner as well as the raw configuration
// that is handed to the provisioner for it to process. // that is handed to the provisioner for it to process.
type RawProvisionerConfig struct { type RawProvisionerConfig struct {
TemplateOnlyExcept `mapstructure:",squash"`
Type string Type string
Override map[string]interface{} Override map[string]interface{}
@ -120,18 +124,25 @@ func ParseTemplate(data []byte) (t *Template, err error) {
// Gather all the variables // Gather all the variables
for k, v := range rawTpl.Variables { for k, v := range rawTpl.Variables {
var variable RawVariable var variable RawVariable
variable.Default = ""
variable.Required = v == nil variable.Required = v == nil
if v != nil { // Create a new mapstructure decoder in order to decode the default
def, ok := v.(string) // value since this is the only value in the regular template that
if !ok { // can be weakly typed.
errors = append(errors, decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
fmt.Errorf("variable '%s': default value must be string or null", k)) Result: &variable.Default,
continue WeaklyTypedInput: true,
} })
if err != nil {
// This should never happen.
panic(err)
}
variable.Default = def err = decoder.Decode(v)
if err != nil {
errors = append(errors,
fmt.Errorf("Error decoding default value for user var '%s': %s", k, err))
continue
} }
t.Variables[k] = variable t.Variables[k] = variable
@ -189,32 +200,50 @@ func ParseTemplate(data []byte) (t *Template, err error) {
continue continue
} }
t.PostProcessors[i] = make([]RawPostProcessorConfig, len(rawPP)) configs := make([]RawPostProcessorConfig, 0, len(rawPP))
configs := t.PostProcessors[i]
for j, pp := range rawPP { for j, pp := range rawPP {
config := &configs[j] var config RawPostProcessorConfig
if err := mapstructure.Decode(pp, config); err != nil { if err := mapstructure.Decode(pp, &config); err != nil {
if merr, ok := err.(*mapstructure.Error); ok { if merr, ok := err.(*mapstructure.Error); ok {
for _, err := range merr.Errors { for _, err := range merr.Errors {
errors = append(errors, fmt.Errorf("Post-processor #%d.%d: %s", i+1, j+1, err)) errors = append(errors,
fmt.Errorf("Post-processor #%d.%d: %s", i+1, j+1, err))
} }
} else { } else {
errors = append(errors, fmt.Errorf("Post-processor %d.%d: %s", i+1, j+1, err)) errors = append(errors,
fmt.Errorf("Post-processor %d.%d: %s", i+1, j+1, err))
} }
continue continue
} }
if config.Type == "" { if config.Type == "" {
errors = append(errors, fmt.Errorf("Post-processor %d.%d: missing 'type'", i+1, j+1)) errors = append(errors,
fmt.Errorf("Post-processor %d.%d: missing 'type'", i+1, j+1))
continue continue
} }
// Remove the input keep_input_artifact option // Remove the input keep_input_artifact option
config.TemplateOnlyExcept.Prune(pp)
delete(pp, "keep_input_artifact") delete(pp, "keep_input_artifact")
// Verify that the only settings are good
if errs := config.TemplateOnlyExcept.Validate(t.Builders); len(errs) > 0 {
for _, err := range errs {
errors = append(errors,
fmt.Errorf("Post-processor %d.%d: %s", i+1, j+1, err))
}
continue
}
config.RawConfig = pp config.RawConfig = pp
// Add it to the list of configs
configs = append(configs, config)
} }
t.PostProcessors[i] = configs
} }
// Gather all the provisioners // Gather all the provisioners
@ -237,9 +266,8 @@ func ParseTemplate(data []byte) (t *Template, err error) {
continue continue
} }
// The provisioners not only don't need or want the override settings // Delete the keys that we used
// (as they are processed as part of the preparation below), but will raw.TemplateOnlyExcept.Prune(v)
// actively reject them as invalid configuration.
delete(v, "override") delete(v, "override")
// Verify that the override keys exist... // Verify that the override keys exist...
@ -250,6 +278,14 @@ func ParseTemplate(data []byte) (t *Template, err error) {
} }
} }
// Verify that the only settings are good
if errs := raw.TemplateOnlyExcept.Validate(t.Builders); len(errs) > 0 {
for _, err := range errs {
errors = append(errors,
fmt.Errorf("provisioner %d: %s", i+1, err))
}
}
raw.RawConfig = v raw.RawConfig = v
} }
@ -400,8 +436,12 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
// Prepare the post-processors // Prepare the post-processors
postProcessors := make([][]coreBuildPostProcessor, 0, len(t.PostProcessors)) postProcessors := make([][]coreBuildPostProcessor, 0, len(t.PostProcessors))
for _, rawPPs := range t.PostProcessors { for _, rawPPs := range t.PostProcessors {
current := make([]coreBuildPostProcessor, len(rawPPs)) current := make([]coreBuildPostProcessor, 0, len(rawPPs))
for i, rawPP := range rawPPs { for _, rawPP := range rawPPs {
if rawPP.TemplateOnlyExcept.Skip(name) {
continue
}
pp, err := components.PostProcessor(rawPP.Type) pp, err := components.PostProcessor(rawPP.Type)
if err != nil { if err != nil {
return nil, err return nil, err
@ -411,12 +451,18 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
return nil, fmt.Errorf("PostProcessor type not found: %s", rawPP.Type) return nil, fmt.Errorf("PostProcessor type not found: %s", rawPP.Type)
} }
current[i] = coreBuildPostProcessor{ current = append(current, coreBuildPostProcessor{
processor: pp, processor: pp,
processorType: rawPP.Type, processorType: rawPP.Type,
config: rawPP.RawConfig, config: rawPP.RawConfig,
keepInputArtifact: rawPP.KeepInputArtifact, keepInputArtifact: rawPP.KeepInputArtifact,
} })
}
// If we have no post-processors in this chain, just continue.
// This can happen if the post-processors skip certain builds.
if len(current) == 0 {
continue
} }
postProcessors = append(postProcessors, current) postProcessors = append(postProcessors, current)
@ -425,6 +471,10 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
// Prepare the provisioners // Prepare the provisioners
provisioners := make([]coreBuildProvisioner, 0, len(t.Provisioners)) provisioners := make([]coreBuildProvisioner, 0, len(t.Provisioners))
for _, rawProvisioner := range t.Provisioners { for _, rawProvisioner := range t.Provisioners {
if rawProvisioner.TemplateOnlyExcept.Skip(name) {
continue
}
var provisioner Provisioner var provisioner Provisioner
provisioner, err = components.Provisioner(rawProvisioner.Type) provisioner, err = components.Provisioner(rawProvisioner.Type)
if err != nil { if err != nil {
@ -471,3 +521,69 @@ func (t *Template) Build(name string, components *ComponentFinder) (b Build, err
return return
} }
// TemplateOnlyExcept contains the logic required for "only" and "except"
// meta-parameters.
type TemplateOnlyExcept struct {
Only []string
Except []string
}
// Prune will prune out the used values from the raw map.
func (t *TemplateOnlyExcept) Prune(raw map[string]interface{}) {
delete(raw, "except")
delete(raw, "only")
}
// Skip tests if we should skip putting this item onto a build.
func (t *TemplateOnlyExcept) Skip(name string) bool {
if len(t.Only) > 0 {
onlyFound := false
for _, n := range t.Only {
if n == name {
onlyFound = true
break
}
}
if !onlyFound {
// Skip this provisioner
return true
}
}
// If the name is in the except list, then skip that
for _, n := range t.Except {
if n == name {
return true
}
}
return false
}
// Validates the only/except parameters.
func (t *TemplateOnlyExcept) Validate(b map[string]RawBuilderConfig) (e []error) {
if len(t.Only) > 0 && len(t.Except) > 0 {
e = append(e,
fmt.Errorf("Only one of 'only' or 'except' may be specified."))
}
if len(t.Only) > 0 {
for _, n := range t.Only {
if _, ok := b[n]; !ok {
e = append(e,
fmt.Errorf("'only' specified builder '%s' not found", n))
}
}
}
for _, n := range t.Except {
if _, ok := b[n]; !ok {
e = append(e,
fmt.Errorf("'except' specified builder '%s' not found", n))
}
}
return
}
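The `only`/`except` semantics implemented above can be sanity-checked with a standalone reduction (the `onlyExcept` type below is an illustrative sketch, not the packer package itself): a name is skipped when `only` is non-empty and omits it, or when `except` lists it.

```go
package main

import "fmt"

// onlyExcept is an illustrative reduction of TemplateOnlyExcept.
type onlyExcept struct {
	Only   []string
	Except []string
}

func (t *onlyExcept) skip(name string) bool {
	// A non-empty "only" list acts as an allow-list.
	if len(t.Only) > 0 {
		found := false
		for _, n := range t.Only {
			if n == name {
				found = true
				break
			}
		}
		if !found {
			return true
		}
	}

	// "except" acts as a deny-list.
	for _, n := range t.Except {
		if n == name {
			return true
		}
	}

	return false
}

func main() {
	oe := &onlyExcept{Except: []string{"test1"}}
	fmt.Println(oe.skip("test1"), oe.skip("test2")) // prints: true false
}
```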

@@ -9,6 +9,33 @@ import (
	"testing"
)
func testTemplateComponentFinder() *ComponentFinder {
builder := testBuilder()
pp := new(TestPostProcessor)
provisioner := &MockProvisioner{}
builderMap := map[string]Builder{
"test-builder": builder,
}
ppMap := map[string]PostProcessor{
"test-pp": pp,
}
provisionerMap := map[string]Provisioner{
"test-prov": provisioner,
}
builderFactory := func(n string) (Builder, error) { return builderMap[n], nil }
ppFactory := func(n string) (PostProcessor, error) { return ppMap[n], nil }
provFactory := func(n string) (Provisioner, error) { return provisionerMap[n], nil }
return &ComponentFinder{
Builder: builderFactory,
PostProcessor: ppFactory,
Provisioner: provFactory,
}
}
func TestParseTemplateFile_basic(t *testing.T) {
	data := `
	{

@@ -364,7 +391,8 @@ func TestParseTemplate_Variables(t *testing.T) {
	{
		"variables": {
			"foo": "bar",
			"bar": null,
			"baz": 27
		},

		"builders": [{"type": "something"}]

@@ -376,7 +404,7 @@ func TestParseTemplate_Variables(t *testing.T) {
		t.Fatalf("err: %s", err)
	}

	if result.Variables == nil || len(result.Variables) != 3 {
		t.Fatalf("bad vars: %#v", result.Variables)
	}

@@ -395,6 +423,14 @@ func TestParseTemplate_Variables(t *testing.T) {
	if !result.Variables["bar"].Required {
		t.Fatal("bar should be required")
	}

	if result.Variables["baz"].Default != "27" {
		t.Fatal("baz default should be \"27\"")
	}

	if result.Variables["baz"].Required {
		t.Fatal("baz should not be required")
	}
}

func TestParseTemplate_variablesBadDefault(t *testing.T) {

@@ -663,6 +699,386 @@ func TestTemplate_Build(t *testing.T) {
	}
}
func TestTemplateBuild_exceptOnlyPP(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"except": ["test1"],
"only": ["test1"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptOnlyProv(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"except": ["test1"],
"only": ["test1"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptPPInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"except": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptPP(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"except": ["test1"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no post-processors
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.postProcessors) > 0 {
t.Fatal("should have no postProcessors")
}
// Verify test2 has no post-processors
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.postProcessors) != 1 {
t.Fatalf("invalid: %d", len(cbuild.postProcessors))
}
}
func TestTemplateBuild_exceptProvInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"except": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_exceptProv(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"except": ["test1"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no provisioners
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.provisioners) > 0 {
t.Fatal("should have no provisioners")
}
// Verify test2 has no provisioners
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.provisioners) != 1 {
t.Fatalf("invalid: %d", len(cbuild.provisioners))
}
}
func TestTemplateBuild_onlyPPInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"only": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_onlyPP(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"post-processors": [
{
"type": "test-pp",
"only": ["test2"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no post-processors
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.postProcessors) > 0 {
t.Fatal("should have no postProcessors")
}
// Verify test2 has no post-processors
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.postProcessors) != 1 {
t.Fatalf("invalid: %d", len(cbuild.postProcessors))
}
}
func TestTemplateBuild_onlyProvInvalid(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"only": ["test5"]
}
]
}
`
_, err := ParseTemplate([]byte(data))
if err == nil {
t.Fatal("should have error")
}
}
func TestTemplateBuild_onlyProv(t *testing.T) {
data := `
{
"builders": [
{
"name": "test1",
"type": "test-builder"
},
{
"name": "test2",
"type": "test-builder"
}
],
"provisioners": [
{
"type": "test-prov",
"only": ["test2"]
}
]
}
`
template, err := ParseTemplate([]byte(data))
if err != nil {
t.Fatalf("err: %s", err)
}
// Verify test1 has no provisioners
build, err := template.Build("test1", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild := build.(*coreBuild)
if len(cbuild.provisioners) > 0 {
t.Fatal("should have no provisioners")
}
// Verify test2 has no provisioners
build, err = template.Build("test2", testTemplateComponentFinder())
if err != nil {
t.Fatalf("err: %s", err)
}
cbuild = build.(*coreBuild)
if len(cbuild.provisioners) != 1 {
t.Fatalf("invalid: %d", len(cbuild.provisioners))
}
}
func TestTemplate_Build_ProvisionerOverride(t *testing.T) {
	assert := asserts.NewTestingAsserts(t, true)

@@ -10,7 +10,7 @@ import (
var GitCommit string

// The version of packer.
const Version = "0.3.9"

// Any pre-release marker for the version. If this is "" (empty string),
// then it means that it is a final release. Otherwise, this is the

@@ -24,15 +24,12 @@ type Config struct {
}

type PostProcessor struct {
	config      Config
	premade     map[string]packer.PostProcessor
	extraConfig map[string]interface{}
}

func (p *PostProcessor) Configure(raws ...interface{}) error {
	_, err := common.DecodeConfig(&p.config, raws...)
	if err != nil {
		return err

@@ -45,10 +42,8 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
	tpl.UserVars = p.config.PackerUserVars

	// Defaults
	if p.config.OutputPath == "" {
		p.config.OutputPath = "packer_{{ .BuildName }}_{{.Provider}}.box"
	}

	// Accumulate any errors

@@ -58,10 +53,18 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
			errs, fmt.Errorf("Error parsing output template: %s", err))
	}

	// Store extra configuration we'll send to each post-processor type
	p.extraConfig = make(map[string]interface{})
	p.extraConfig["output"] = p.config.OutputPath
	p.extraConfig["packer_build_name"] = p.config.PackerBuildName
	p.extraConfig["packer_builder_type"] = p.config.PackerBuilderType
	p.extraConfig["packer_debug"] = p.config.PackerDebug
	p.extraConfig["packer_force"] = p.config.PackerForce
	p.extraConfig["packer_user_variables"] = p.config.PackerUserVars

	// TODO(mitchellh): Properly handle multiple raw configs. This isn't
	// very pressing at the moment because at the time of this comment
	// only the first member of raws can contain the actual type-overrides.
	var mapConfig map[string]interface{}
	if err := mapstructure.Decode(raws[0], &mapConfig); err != nil {
		errs = packer.MultiErrorAppend(errs,

@@ -71,18 +74,14 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
	p.premade = make(map[string]packer.PostProcessor)
	for k, raw := range mapConfig {
		pp, err := p.subPostProcessor(k, raw, p.extraConfig)
		if err != nil {
			errs = packer.MultiErrorAppend(errs, err)
			continue
		}

		if pp == nil {
			continue
		}

		p.premade[k] = pp

@@ -106,13 +105,15 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
	pp, ok := p.premade[ppName]
	if !ok {
		log.Printf("Premade post-processor for '%s' not found. Creating.", ppName)

		var err error
		pp, err = p.subPostProcessor(ppName, nil, p.extraConfig)
		if err != nil {
			return nil, false, err
		}

		if pp == nil {
			return nil, false, fmt.Errorf("Vagrant box post-processor not found: %s", ppName)
		}
	}

@@ -120,6 +121,21 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
	return pp.PostProcess(ui, artifact)
}
func (p *PostProcessor) subPostProcessor(key string, specific interface{}, extra map[string]interface{}) (packer.PostProcessor, error) {
pp := keyToPostProcessor(key)
if pp == nil {
return nil, nil
}
if err := pp.Configure(extra, specific); err != nil {
return nil, err
}
return pp, nil
}
// keyToPostProcessor maps a configuration key to the actual post-processor
// it will be configuring. This returns a new instance of that post-processor.
func keyToPostProcessor(key string) packer.PostProcessor {
	switch key {
	case "aws":

@@ -20,6 +20,8 @@ type Config struct {
	ConfigTemplate      string   `mapstructure:"config_template"`
	CookbookPaths       []string `mapstructure:"cookbook_paths"`
	RolesPath           string   `mapstructure:"roles_path"`
	DataBagsPath        string   `mapstructure:"data_bags_path"`
	ExecuteCommand      string   `mapstructure:"execute_command"`
	InstallCommand      string   `mapstructure:"install_command"`
	RemoteCookbookPaths []string `mapstructure:"remote_cookbook_paths"`

@@ -38,6 +40,14 @@ type Provisioner struct {
type ConfigTemplate struct {
	CookbookPaths string
	DataBagsPath  string
	RolesPath     string

	// Templates don't support boolean statements until Go 1.2. In the
	// mean time, we do this.
	// TODO(mitchellh): Remove when Go 1.2 is released
	HasDataBagsPath bool
	HasRolesPath    bool
}

type ExecuteTemplate struct {

@@ -83,6 +93,8 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
	templates := map[string]*string{
		"config_template": &p.config.ConfigTemplate,
		"data_bags_path":  &p.config.DataBagsPath,
		"roles_path":      &p.config.RolesPath,
		"staging_dir":     &p.config.StagingDir,
	}

@@ -144,6 +156,24 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
		}
	}
if p.config.RolesPath != "" {
pFileInfo, err := os.Stat(p.config.RolesPath)
if err != nil || !pFileInfo.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad roles path '%s': %s", p.config.RolesPath, err))
}
}
if p.config.DataBagsPath != "" {
pFileInfo, err := os.Stat(p.config.DataBagsPath)
if err != nil || !pFileInfo.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad data bags path '%s': %s", p.config.DataBagsPath, err))
}
}
	// Process the user variables within the JSON and set the JSON.
	// Do this early so that we can validate and show errors.
	p.config.Json, err = p.processJsonUserVars()

@@ -180,7 +210,23 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
		cookbookPaths = append(cookbookPaths, targetPath)
	}
	rolesPath := ""
	if p.config.RolesPath != "" {
		// Assign (not ":=") so the outer rolesPath is set rather than
		// shadowed by a block-local variable.
		rolesPath = fmt.Sprintf("%s/roles", p.config.StagingDir)
		if err := p.uploadDirectory(ui, comm, rolesPath, p.config.RolesPath); err != nil {
			return fmt.Errorf("Error uploading roles: %s", err)
		}
	}

	dataBagsPath := ""
	if p.config.DataBagsPath != "" {
		dataBagsPath = fmt.Sprintf("%s/data_bags", p.config.StagingDir)
		if err := p.uploadDirectory(ui, comm, dataBagsPath, p.config.DataBagsPath); err != nil {
			return fmt.Errorf("Error uploading data bags: %s", err)
		}
	}

	configPath, err := p.createConfig(ui, comm, cookbookPaths, rolesPath, dataBagsPath)
	if err != nil {
		return fmt.Errorf("Error creating Chef config file: %s", err)
	}

@@ -217,7 +263,7 @@ func (p *Provisioner) uploadDirectory(ui packer.Ui, comm packer.Communicator, ds
	return comm.UploadDir(dst, src, nil)
}

func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, localCookbooks []string, rolesPath string, dataBagsPath string) (string, error) {
	ui.Message("Creating configuration file 'solo.rb'")

	cookbook_paths := make([]string, len(p.config.RemoteCookbookPaths)+len(localCookbooks))

@@ -248,7 +294,11 @@ func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, local
	}

	configString, err := p.config.tpl.Process(tpl, &ConfigTemplate{
		CookbookPaths:   strings.Join(cookbook_paths, ","),
		RolesPath:       rolesPath,
		DataBagsPath:    dataBagsPath,
		HasRolesPath:    rolesPath != "",
		HasDataBagsPath: dataBagsPath != "",
	})
	if err != nil {
		return "", err

@@ -399,5 +449,11 @@ func (p *Provisioner) processJsonUserVars() (map[string]interface{}, error) {
}

var DefaultConfigTemplate = `
cookbook_path [{{.CookbookPaths}}]
{{if .HasRolesPath}}
role_path "{{.RolesPath}}"
{{end}}
{{if .HasDataBagsPath}}
data_bag_path "{{.DataBagsPath}}"
{{end}}
`

@@ -75,11 +75,25 @@ func TestProvisionerPrepare_cookbookPaths(t *testing.T) {
		t.Fatalf("err: %s", err)
	}

	rolesPath, err := ioutil.TempDir("", "roles")
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	dataBagsPath, err := ioutil.TempDir("", "data_bags")
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	defer os.Remove(path1)
	defer os.Remove(path2)
	defer os.Remove(rolesPath)
	defer os.Remove(dataBagsPath)

	config := testConfig()
	config["cookbook_paths"] = []string{path1, path2}
	config["roles_path"] = rolesPath
	config["data_bags_path"] = dataBagsPath

	err = p.Prepare(config)
	if err != nil {

@@ -93,6 +107,58 @@ func TestProvisionerPrepare_cookbookPaths(t *testing.T) {
	if p.config.CookbookPaths[0] != path1 || p.config.CookbookPaths[1] != path2 {
		t.Fatalf("unexpected: %#v", p.config.CookbookPaths)
	}
if p.config.RolesPath != rolesPath {
t.Fatalf("unexpected: %#v", p.config.RolesPath)
}
if p.config.DataBagsPath != dataBagsPath {
t.Fatalf("unexpected: %#v", p.config.DataBagsPath)
}
}
func TestProvisionerPrepare_dataBagsPath(t *testing.T) {
var p Provisioner
dataBagsPath, err := ioutil.TempDir("", "data_bags")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(dataBagsPath)
config := testConfig()
config["data_bags_path"] = dataBagsPath
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if p.config.DataBagsPath != dataBagsPath {
t.Fatalf("unexpected: %#v", p.config.DataBagsPath)
}
}
func TestProvisionerPrepare_rolesPath(t *testing.T) {
var p Provisioner
rolesPath, err := ioutil.TempDir("", "roles")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(rolesPath)
config := testConfig()
config["roles_path"] = rolesPath
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if p.config.RolesPath != rolesPath {
t.Fatalf("unexpected: %#v", p.config.RolesPath)
}
}

func TestProvisionerPrepare_json(t *testing.T) {

@@ -82,7 +82,9 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
	// Templates
	templates := map[string]*string{
		"hiera_config_path": &p.config.HieraConfigPath,
		"manifest_file":     &p.config.ManifestFile,
		"staging_dir":       &p.config.StagingDir,
	}

	for n, ptr := range templates {

@@ -8,6 +8,7 @@ import (
	"fmt"
	"github.com/mitchellh/packer/common"
	"github.com/mitchellh/packer/packer"
	"io"
	"io/ioutil"
	"log"
	"os"
@@ -20,6 +21,10 @@ const DefaultRemotePath = "/tmp/script.sh"
type config struct {
	common.PackerConfig `mapstructure:",squash"`

	// If true, the script contains binary and line endings will not be
	// converted from Windows to Unix-style.
	Binary bool

	// An inline script to execute. Multiple strings are all executed
	// in the context of a single shell.
	Inline []string

@@ -259,6 +264,11 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
		return err
	}

	var r io.Reader = f
	if !p.config.Binary {
		r = &UnixReader{Reader: r}
	}

	// Upload the wrapped reader so the line-ending conversion actually
	// applies (uploading f directly would bypass the UnixReader).
	if err := comm.Upload(p.config.RemotePath, r); err != nil {
		return fmt.Errorf("Error uploading script: %s", err)
	}

@@ -0,0 +1,88 @@
package shell
import (
"bufio"
"bytes"
"io"
"sync"
)
// UnixReader is a Reader implementation that automatically converts
// Windows line endings to Unix line endings.
type UnixReader struct {
Reader io.Reader
buf []byte
once sync.Once
scanner *bufio.Scanner
}
func (r *UnixReader) Read(p []byte) (n int, err error) {
// Create the buffered reader once
r.once.Do(func() {
r.scanner = bufio.NewScanner(r.Reader)
r.scanner.Split(scanUnixLine)
})
// If we have no data in our buffer, scan to the next token
if len(r.buf) == 0 {
if !r.scanner.Scan() {
err = r.scanner.Err()
if err == nil {
err = io.EOF
}
return 0, err
}
r.buf = r.scanner.Bytes()
}
// Write out as much data as we can to the buffer, storing the rest
// for the next read.
n = len(p)
if n > len(r.buf) {
n = len(r.buf)
}
copy(p, r.buf)
r.buf = r.buf[n:]
return
}
// scanUnixLine is a bufio.Scanner SplitFunc. It tokenizes on lines, but
// only returns unix-style lines. So even if the line is "one\r\n", the
// token returned will be "one\n".
func scanUnixLine(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i := bytes.IndexByte(data, '\n'); i >= 0 {
// We have a new-line terminated line. Return the line with the newline
return i + 1, dropCR(data[0 : i+1]), nil
}
if atEOF {
// We have a final, non-terminated line
return len(data), dropCR(data), nil
}
if data[len(data)-1] != '\r' {
// We have a normal line, just let it tokenize
return len(data), data, nil
}
// We need more data
return 0, nil, nil
}
func dropCR(data []byte) []byte {
	// Guard with len(data) > 1: indexing len(data)-2 on a single-byte
	// slice would panic.
	if len(data) > 1 && data[len(data)-2] == '\r' {
		// Chop the '\r' off and terminate the line with '\n'
		data = data[0 : len(data)-1]
		data[len(data)-1] = '\n'
	}

	return data
}
}

@@ -0,0 +1,33 @@
package shell
import (
"bytes"
"io"
"testing"
)
func TestUnixReader_impl(t *testing.T) {
var raw interface{}
raw = new(UnixReader)
if _, ok := raw.(io.Reader); !ok {
t.Fatal("should be reader")
}
}
func TestUnixReader(t *testing.T) {
input := "one\r\ntwo\nthree\r\n"
expected := "one\ntwo\nthree\n"
r := &UnixReader{
Reader: bytes.NewReader([]byte(input)),
}
result := new(bytes.Buffer)
if _, err := io.Copy(result, r); err != nil {
t.Fatalf("err: %s", err)
}
if result.String() != expected {
t.Fatalf("bad: %#v", result.String())
}
}

@@ -26,6 +26,9 @@ if [ "$(go env GOOS)" = "windows" ]; then
    EXTENSION=".exe"
fi

# Make sure that if we're killed, we kill all our subprocesses
trap "kill 0" SIGINT SIGTERM EXIT

# If we're building a race-enabled build, then set that up.
if [ ! -z $PACKER_RACE ]; then
    echo -e "${OK_COLOR}--> Building with race detection enabled${NO_COLOR}"

@@ -35,21 +38,57 @@ fi
echo -e "${OK_COLOR}--> Installing dependencies to speed up builds...${NO_COLOR}"
go get ./...
# This function waits for all background tasks to complete
waitAll() {
RESULT=0
for job in `jobs -p`; do
wait $job
if [ $? -ne 0 ]; then
RESULT=1
fi
done
if [ $RESULT -ne 0 ]; then
exit $RESULT
fi
}
waitSingle() {
if [ ! -z $PACKER_NO_BUILD_PARALLEL ]; then
waitAll
fi
}
if [ -z $PACKER_NO_BUILD_PARALLEL ]; then
echo -e "${OK_COLOR}--> NOTE: Compilation of components " \
"will be done in parallel.${NO_COLOR}"
fi
# Compile the main Packer app
echo -e "${OK_COLOR}--> Compiling Packer${NO_COLOR}"
(
    go build \
        ${PACKER_RACE} \
        -ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
        -v \
        -o bin/packer${EXTENSION} .
) &

waitSingle

# Go over each plugin and build it
for PLUGIN in $(find ./plugin -mindepth 1 -maxdepth 1 -type d); do
    PLUGIN_NAME=$(basename ${PLUGIN})
    echo -e "${OK_COLOR}--> Compiling Plugin: ${PLUGIN_NAME}${NO_COLOR}"
    (
        go build \
            ${PACKER_RACE} \
            -ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
            -v \
            -o bin/packer-${PLUGIN_NAME}${EXTENSION} ${PLUGIN}
    ) &

    waitSingle
done

waitAll

@@ -2,11 +2,11 @@ source 'https://rubygems.org'
ruby '1.9.3'

gem "middleman", "~> 3.1.5"
gem "middleman-minify-html", "~> 3.1.1"
gem "rack-contrib", "~> 1.1.0"
gem "redcarpet", "~> 3.0.0"
gem "therubyracer", "~> 0.12.0"
gem "thin", "~> 1.5.0"

group :development do

@@ -1,134 +1,109 @@
GEM
  remote: https://rubygems.org/
  specs:
    activesupport (3.2.14)
      i18n (~> 0.6, >= 0.6.4)
      multi_json (~> 1.0)
    chunky_png (1.2.8)
    coffee-script (2.2.0)
      coffee-script-source
      execjs
    coffee-script-source (1.6.3)
    compass (0.12.2)
      chunky_png (~> 1.2)
      fssm (>= 0.2.7)
      sass (~> 3.1)
    daemons (1.1.9)
    eventmachine (1.0.3)
    execjs (1.4.0)
      multi_json (~> 1.0)
    ffi (1.9.0)
    fssm (0.2.10)
    haml (4.0.3)
      tilt
    highline (1.6.19)
    hike (1.2.3)
    i18n (0.6.5)
    kramdown (1.1.0)
    libv8 (3.16.14.3)
    listen (1.2.3)
      rb-fsevent (>= 0.9.3)
      rb-inotify (>= 0.9)
      rb-kqueue (>= 0.2)
    middleman (3.1.5)
      coffee-script (~> 2.2.0)
      compass (>= 0.12.2)
      execjs (~> 1.4.0)
      haml (>= 3.1.6)
      kramdown (~> 1.1.0)
      middleman-core (= 3.1.5)
      middleman-more (= 3.1.5)
      middleman-sprockets (>= 3.1.2)
      sass (>= 3.1.20)
      uglifier (~> 2.1.0)
    middleman-core (3.1.5)
      activesupport (~> 3.2.6)
      bundler (~> 1.1)
      i18n (~> 0.6.1)
      listen (~> 1.2.2)
      rack (>= 1.4.5)
      rack-test (~> 0.6.1)
      thor (>= 0.15.2, < 2.0)
      tilt (~> 1.3.6)
    middleman-minify-html (3.1.1)
      middleman-core (~> 3.0)
    middleman-more (3.1.5)
    middleman-sprockets (3.1.4)
      middleman-core (>= 3.0.14)
      middleman-more (>= 3.0.14)
      sprockets (~> 2.1)
      sprockets-helpers (~> 1.0.0)
      sprockets-sass (~> 1.0.0)
    multi_json (1.8.0)
    rack (1.5.2)
    rack-contrib (1.1.0)
      rack (>= 0.9.1)
    rack-test (0.6.2)
      rack (>= 1.0)
    rb-fsevent (0.9.3)
    rb-inotify (0.9.2)
      ffi (>= 0.5.0)
    rb-kqueue (0.2.0)
      ffi (>= 0.5.0)
    redcarpet (3.0.0)
    ref (1.0.5)
    sass (3.2.10)
    sprockets (2.10.0)
      hike (~> 1.2)
      multi_json (~> 1.0)
      rack (~> 1.0)
      tilt (~> 1.1, != 1.3.0)
    sprockets-helpers (1.0.1)
      sprockets (~> 2.0)
    sprockets-sass (1.0.1)
      sprockets (~> 2.0)
      tilt (~> 1.1)
    therubyracer (0.12.0)
      libv8 (~> 3.16.14.0)
      ref
    thin (1.5.1)
      daemons (>= 1.0.9)
      eventmachine (>= 0.12.6)
      rack (>= 1.0.0)
    thor (0.18.1)
    tilt (1.3.7)
    uglifier (2.1.2)
      execjs (>= 0.3.0)
      multi_json (~> 1.0, >= 1.0.2)

PLATFORMS
  ruby

DEPENDENCIES
  highline (~> 1.6.15)
  middleman (~> 3.1.5)
  middleman-minify-html (~> 3.1.1)
  rack-contrib (~> 1.1.0)
  redcarpet (~> 3.0.0)
  therubyracer (~> 0.12.0)
  thin (~> 1.5.0)


@@ -85,6 +85,10 @@ Optional:
 * `format` (string) - Either "ovf" or "ova", this specifies the output
   format of the exported virtual machine. This defaults to "ovf".
+* `guest_additions_attach` (bool) - If this is true (defaults to "false"),
+  the guest additions ISO will be attached to the virtual machine as a CD
+  rather than uploaded as a raw ISO.
 * `guest_additions_path` (string) - The path on the guest virtual machine
   where the VirtualBox guest additions ISO will be uploaded. By default this
   is "VBoxGuestAdditions.iso" which should upload into the login directory
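As a sketch of how the new option documented above might be used (the ISO URL and other values are illustrative placeholders, not taken from this commit):

```json
{
  "type": "virtualbox",
  "iso_url": "http://example.com/guest.iso",
  "guest_additions_attach": true
}
```

With `guest_additions_attach` set to `true`, the guest additions ISO is mounted as a CD rather than copied over the SSH connection, which is the easier path for Windows guests.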


@@ -44,6 +44,14 @@ configuration is actually required, but at least `run_list` is recommended.
   to the remote machine in the directory specified by the `staging_directory`.
   By default, this is empty.
+* `roles_path` (string) - The path to the "roles" directory on your local filesystem.
+  These will be uploaded to the remote machine in the directory specified by the
+  `staging_directory`. By default, this is empty.
+* `data_bags_path` (string) - The path to the "data_bags" directory on your local filesystem.
+  These will be uploaded to the remote machine in the directory specified by the
+  `staging_directory`. By default, this is empty.
 * `execute_command` (string) - The command used to execute Chef. This has
   various [configuration template variables](/docs/templates/configuration-templates.html)
   available. See below for more information.
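To make the two new chef-solo options concrete, a provisioner section using them might look like the following sketch; the directory names and the `base` recipe are hypothetical examples:

```json
{
  "type": "chef-solo",
  "cookbook_paths": ["cookbooks"],
  "roles_path": "roles",
  "data_bags_path": "data_bags",
  "run_list": ["recipe[base]"]
}
```

Both directories are uploaded into the staging directory on the remote machine, alongside the cookbooks, so roles and data bags referenced from the `run_list` resolve at converge time.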


@@ -79,7 +79,7 @@ By default, Packer uses the following command (broken across multiple lines
 for readability) to execute Puppet:

 ```
-{{.FacterVars}}{{if .Sudo} sudo -E {{end}}puppet apply \
+{{.FacterVars}}{{if .Sudo}} sudo -E {{end}}puppet apply \
   --verbose \
   --modulepath='{{.ModulePath}}' \
   {{if .HasHieraConfigPath}}--hiera_config='{{.HieraConfigPath}}' {{end}} \


@@ -47,6 +47,10 @@ Exactly _one_ of the following is required:
 Optional parameters:
+* `binary` (boolean) - If true, specifies that the script(s) are binary
+  files, and Packer should therefore not convert Windows line endings to
+  Unix line endings (if there are any). By default this is false.
 * `environment_vars` (array of strings) - An array of key/value pairs
   to inject prior to the execute_command. The format should be
   `key=value`. Packer injects some environmental variables by default
@@ -98,7 +102,7 @@ root privileges without worrying about password prompts.
 ## Default Environmental Variables

 In addition to being able to specify custom environmental variables using
-the `environmental_vars` configuration, the provisioner automatically
-defines certain commonly useful environmental variables:
+the `environment_vars` configuration, the provisioner automatically
+defines certain commonly useful environmental variables:

 * `PACKER_BUILD_NAME` is set to the name of the build that Packer is running.
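A shell provisioner combining the options discussed above might look like this sketch; the script name and environment variable are illustrative placeholders:

```json
{
  "type": "shell",
  "script": "scripts/setup.sh",
  "binary": false,
  "environment_vars": ["APP_ENV=staging"]
}
```

With `binary` left at its default of `false`, Windows line endings in `setup.sh` are converted to Unix-style before execution; setting it to `true` uploads the file byte-for-byte.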


@@ -125,3 +125,28 @@ The answer is no, of course not. Packer is smart enough to figure out
 that at least one post-processor requested that the input be kept, so it will keep
 it around.
 </div>
+
+## Run on Specific Builds
+
+You can use the `only` or `except` configurations to run a post-processor
+only with specific builds. These two configurations do what you expect:
+`only` will only run the post-processor on the specified builds and
+`except` will run the post-processor on anything other than the specified
+builds.
+
+An example of `only` being used is shown below, but the usage of `except`
+is effectively the same. `only` and `except` can only be specified on "detailed"
+configurations. If you have a sequence of post-processors to run, `only`
+and `except` will only affect that single post-processor in the sequence.
+
+<pre class="prettyprint">
+{
+  "type": "vagrant",
+  "only": ["virtualbox"]
+}
+</pre>
+
+The values within `only` or `except` are _build names_, not builder
+types. If you recall, build names by default are just their builder type,
+but if you specify a custom `name` parameter, then you should use that
+as the value instead of the type.


@@ -53,6 +53,30 @@ provisioner to run a local script within the machines:
 }
 </pre>
+
+## Run on Specific Builds
+
+You can use the `only` or `except` configurations to run a provisioner
+only with specific builds. These two configurations do what you expect:
+`only` will only run the provisioner on the specified builds and
+`except` will run the provisioner on anything other than the specified
+builds.
+
+An example of `only` being used is shown below, but the usage of `except`
+is effectively the same:
+
+<pre class="prettyprint">
+{
+  "type": "shell",
+  "script": "script.sh",
+  "only": ["virtualbox"]
+}
+</pre>
+
+The values within `only` or `except` are _build names_, not builder
+types. If you recall, build names by default are just their builder type,
+but if you specify a custom `name` parameter, then you should use that
+as the value instead of the type.

 ## Build-Specific Overrides

 While the goal of Packer is to produce identical machine images, it


@@ -54,7 +54,7 @@ validation will fail.
 Using the variables is extremely easy. Variables are used by calling
 the user function in the form of <code>{{user &#96;variable&#96;}}</code>.
-This function can be used in _any string_ within the template, in
+This function can be used in _any value_ within the template, in
 builders, provisioners, _anything_. The user variable is available globally
 within the template.
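As a sketch of the user-function syntax described above, a template might declare a variable and reference it from a builder; the variable name, region, and builder settings are illustrative placeholders:

```json
{
  "variables": {
    "aws_region": "us-east-1"
  },
  "builders": [{
    "type": "amazon-ebs",
    "region": "{{user `aws_region`}}"
  }]
}
```

Because the user function works in any value (not just strings), the same mechanism can feed integer or boolean settings elsewhere in the template.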