Merge remote-tracking branch 'upstream/master' into provisioner-chef-solo

Conflicts:
	config.go

Add the chef-solo provisioner back to config.go.
Fix import path for chef-solo provisioner.
James Van Dyke 2013-07-10 08:58:00 -04:00
commit 355fdecafa
51 changed files with 1345 additions and 250 deletions


@ -1,6 +1,47 @@
## 0.2.0 (unreleased)

FEATURES:

* VirtualBox and VMware can now have `floppy_files` specified to attach floppy disks when booting. This allows for unattended Windows installs.

BUG FIXES:

* core: UI messages are now properly prefixed with spaces again.
* virtualbox: "paused" doesn't mean the VM is stopped, improving shutdown detection.

## 0.1.5 (July 7, 2013)

FEATURES:

* "file" uploader will upload files from the machine running Packer to the remote machine.
* VirtualBox guest additions URL and checksum can now be specified, allowing the VirtualBox builder to have the ability to be used completely offline.

IMPROVEMENTS:

* core: If SCP is not available, a more descriptive error message is shown telling the user. [GH-127]
* shell: Scripts are now executed by default according to their shebang, not with `/bin/sh`. [GH-105]
* shell: You can specify what interpreter you want inline scripts to run with `inline_shebang`.
* virtualbox: Delete the packer-made SSH port forwarding prior to exporting the VM.

BUG FIXES:

* core: Non-200 response codes on downloads now show proper errors. [GH-141]
* amazon-ebs: SSH handshake is retried. [GH-130]
* vagrant: The `BuildName` template property works properly in the output path.
* vagrant: Properly configure the provider-specific post-processors so things like `vagrantfile_template` work. [GH-129]
* vagrant: Close filehandles when copying files so Windows can rename files. [GH-100]

## 0.1.4 (July 2, 2013)
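As a rough sketch of how the new options above are meant to be used, a template fragment might combine `floppy_files` with `inline_shebang`; the ISO location, checksum, file names, and commands below are illustrative placeholders, not values taken from this commit:

```json
{
  "builders": [{
    "type": "virtualbox",
    "iso_url": "http://example.com/windows2008.iso",
    "iso_md5": "00000000000000000000000000000000",
    "floppy_files": ["answer_files/Autounattend.xml"]
  }],
  "provisioners": [{
    "type": "shell",
    "inline_shebang": "/bin/bash",
    "inline": ["echo provisioning as $(whoami)"]
  }]
}
```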

CONTRIBUTING.md Normal file

@ -0,0 +1,43 @@
# Contributing to Packer
**First:** if you're unsure or afraid of _anything_, just ask
or submit the issue or pull request anyways. You won't be yelled at for
giving your best effort. The worst that can happen is that you'll be
politely asked to change something. We appreciate any sort of contributions,
and don't want a wall of rules to get in the way of that.
However, for those individuals who want a bit more guidance on the
best way to contribute to the project, read on. This document will cover
what we're looking for. By addressing all the points we're looking for,
it raises the chances we can quickly merge or address your contributions.
## Issues
### Reporting an Issue
* Make sure you test against the latest released version. It is possible
we already fixed the bug you're experiencing.
* Provide a reproducible test case. If a contributor can't reproduce an
issue, then it dramatically lowers the chances it'll get fixed. And in
some cases, the issue will eventually be closed.
* Respond promptly to any questions made by the Packer team to your issue.
Stale issues will be closed.
### Issue Lifecycle
1. The issue is reported.
2. The issue is verified and categorized by a Packer collaborator.
Categorization is done via tags. For example, bugs are marked as "bugs"
and easy fixes are marked as "easy".
3. Unless it is critical, the issue is left for a period of time (sometimes
many weeks), giving outside contributors a chance to address the issue.
4. The issue is addressed in a pull request or commit. The issue will be
referenced in the commit message so that the code that fixes it is clearly
linked.
5. The issue is closed.


@ -77,8 +77,8 @@ For some additional dependencies, Go needs [Mercurial](http://mercurial.selenic.
to be installed. Packer itself doesn't require this but a dependency of a
dependency does.
Next, clone this repository into `$GOPATH/src/github.com/mitchellh/packer` and
then just type `make`. In a few moments, you'll have a working `packer` executable:
```
$ make


@ -14,10 +14,64 @@ import (
)
type stepConnectSSH struct {
cancel bool
conn net.Conn
}
func (s *stepConnectSSH) Run(state map[string]interface{}) multistep.StepAction {
config := state["config"].(config)
ui := state["ui"].(packer.Ui)
var comm packer.Communicator
var err error
waitDone := make(chan bool, 1)
go func() {
comm, err = s.waitForSSH(state)
waitDone <- true
}()
log.Printf("Waiting for SSH, up to timeout: %s", config.SSHTimeout.String())
timeout := time.After(config.SSHTimeout)
WaitLoop:
for {
// Wait for either SSH to become available, a timeout to occur,
// or an interrupt to come through.
select {
case <-waitDone:
if err != nil {
ui.Error(fmt.Sprintf("Error waiting for SSH: %s", err))
return multistep.ActionHalt
}
state["communicator"] = comm
break WaitLoop
case <-timeout:
ui.Error("Timeout waiting for SSH.")
s.cancel = true
return multistep.ActionHalt
case <-time.After(1 * time.Second):
if _, ok := state[multistep.StateCancelled]; ok {
log.Println("Interrupt detected, quitting waiting for SSH.")
return multistep.ActionHalt
}
}
}
return multistep.ActionContinue
}
func (s *stepConnectSSH) Cleanup(map[string]interface{}) {
if s.conn != nil {
s.conn.Close()
s.conn = nil
}
}
// This blocks until SSH becomes available, and sends the communicator
// on the given channel.
func (s *stepConnectSSH) waitForSSH(state map[string]interface{}) (packer.Communicator, error) {
config := state["config"].(config) config := state["config"].(config)
instance := state["instance"].(*ec2.Instance) instance := state["instance"].(*ec2.Instance)
privateKey := state["privateKey"].(string) privateKey := state["privateKey"].(string)
@ -28,98 +82,70 @@ func (s *stepConnectSSH) Run(state map[string]interface{}) multistep.StepAction
keyring := &ssh.SimpleKeychain{} keyring := &ssh.SimpleKeychain{}
err := keyring.AddPEMKey(privateKey) err := keyring.AddPEMKey(privateKey)
if err != nil { if err != nil {
err := fmt.Errorf("Error setting up SSH config: %s", err) return nil, fmt.Errorf("Error setting up SSH config: %s", err)
state["error"] = err
ui.Error(err.Error())
return multistep.ActionHalt
}
// Build the actual SSH client configuration
sshConfig := &gossh.ClientConfig{
User: config.SSHUsername,
Auth: []gossh.ClientAuth{
gossh.ClientAuthKeyring(keyring),
},
}
// Start trying to connect to SSH
connected := make(chan bool, 1)
connectQuit := make(chan bool, 1)
defer func() {
connectQuit <- true
}()
go func() {
var err error
ui.Say("Connecting to the instance via SSH...")
attempts := 0
for {
select {
case <-connectQuit:
return
default:
}
attempts += 1
log.Printf(
"Opening TCP conn for SSH to %s:%d (attempt %d)",
instance.DNSName, config.SSHPort, attempts)
s.conn, err = net.Dial("tcp", fmt.Sprintf("%s:%d", instance.DNSName, config.SSHPort))
if err == nil {
break
}
// A brief sleep so we're not being overly zealous attempting
// to connect to the instance.
time.Sleep(500 * time.Millisecond)
}
connected <- true
}()
log.Printf("Waiting up to %s for SSH connection", config.SSHTimeout)
timeout := time.After(config.SSHTimeout)
ConnectWaitLoop:
for {
select {
case <-connected:
// We connected. Just break the loop.
break ConnectWaitLoop
case <-timeout:
err := errors.New("Timeout waiting for SSH to become available.")
state["error"] = err
ui.Error(err.Error())
return multistep.ActionHalt
case <-time.After(1 * time.Second):
if _, ok := state[multistep.StateCancelled]; ok {
log.Println("Interrupt detected, quitting waiting for SSH.")
return multistep.ActionHalt
}
}
}
ui.Say("Waiting for SSH to become available...")
var comm packer.Communicator
var nc net.Conn
for {
if nc != nil {
nc.Close()
}
time.Sleep(5 * time.Second)
if s.cancel {
log.Println("SSH wait cancelled. Exiting loop.")
return nil, errors.New("SSH wait cancelled")
}
// Attempt to connect to SSH port
log.Printf(
"Opening TCP conn for SSH to %s:%d",
instance.DNSName, config.SSHPort)
nc, err := net.Dial("tcp",
fmt.Sprintf("%s:%d", instance.DNSName, config.SSHPort))
if err != nil {
log.Printf("TCP connection to SSH ip/port failed: %s", err)
continue
}
// Build the actual SSH client configuration
sshConfig := &gossh.ClientConfig{
User: config.SSHUsername,
Auth: []gossh.ClientAuth{
gossh.ClientAuthKeyring(keyring),
},
}
sshConnectSuccess := make(chan bool, 1)
go func() {
comm, err = ssh.New(nc, sshConfig)
if err != nil {
log.Printf("SSH connection fail: %s", err)
sshConnectSuccess <- false
return
}
sshConnectSuccess <- true
}()
select {
case success := <-sshConnectSuccess:
if !success {
continue
}
case <-time.After(5 * time.Second):
log.Printf("SSH handshake timeout. Trying again.")
continue
}
ui.Say("Connected via SSH!")
break
} }
// Store the connection so we can close it later
s.conn = nc
return comm, nil
ui.Error(err.Error())
return multistep.ActionHalt
}
// Set the communicator on the state bag so it can be used later
state["communicator"] = comm
return multistep.ActionContinue
}
func (s *stepConnectSSH) Cleanup(map[string]interface{}) {
if s.conn != nil {
s.conn.Close()
}
}


@ -107,6 +107,9 @@ func (d *DownloadClient) Get() (string, error) {
log.Printf("Downloading: %s", url.String()) log.Printf("Downloading: %s", url.String())
err = d.downloader.Download(f, url) err = d.downloader.Download(f, url)
if err != nil {
return "", err
}
} }
if d.config.Hash != nil { if d.config.Hash != nil {
@ -160,11 +163,22 @@ func (*HTTPDownloader) Cancel() {
}
func (d *HTTPDownloader) Download(dst io.Writer, src *url.URL) error {
log.Printf("Starting download: %s", src.String())
resp, err := http.Get(src.String())
if err != nil {
return err
}
if resp.StatusCode != 200 {
log.Printf(
"Non-200 status code: %d. Getting error body.", resp.StatusCode)
errorBody := new(bytes.Buffer)
io.Copy(errorBody, resp.Body)
return fmt.Errorf("HTTP error '%d'! Remote side responded:\n%s",
resp.StatusCode, errorBody.String())
}
d.progress = 0
d.total = uint(resp.ContentLength)


@ -0,0 +1,135 @@
package common
import (
"fmt"
"github.com/mitchellh/go-fs"
"github.com/mitchellh/go-fs/fat"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"io"
"io/ioutil"
"log"
"os"
"path/filepath"
)
// StepCreateFloppy will create a floppy disk with the given files.
// The floppy disk doesn't support sub-directories. Only files at the
// root level are supported.
type StepCreateFloppy struct {
Files []string
floppyPath string
}
func (s *StepCreateFloppy) Run(state map[string]interface{}) multistep.StepAction {
if len(s.Files) == 0 {
log.Println("No floppy files specified. Floppy disk will not be made.")
return multistep.ActionContinue
}
ui := state["ui"].(packer.Ui)
ui.Say("Creating floppy disk...")
// Create a temporary file to be our floppy drive
floppyF, err := ioutil.TempFile("", "packer")
if err != nil {
state["error"] = fmt.Errorf("Error creating temporary file for floppy: %s", err)
return multistep.ActionHalt
}
defer floppyF.Close()
// Set the path so we can remove it later
s.floppyPath = floppyF.Name()
log.Printf("Floppy path: %s", floppyF.Name())
// Set the size of the file to be a floppy sized
if err := floppyF.Truncate(1440 * 1024); err != nil {
state["error"] = fmt.Errorf("Error creating floppy: %s", err)
return multistep.ActionHalt
}
// BlockDevice backed by the file for our filesystem
log.Println("Initializing block device backed by temporary file")
device, err := fs.NewFileDisk(floppyF)
if err != nil {
state["error"] = fmt.Errorf("Error creating floppy: %s", err)
return multistep.ActionHalt
}
// Format the block device so it contains a valid FAT filesystem
log.Println("Formatting the block device with a FAT filesystem...")
formatConfig := &fat.SuperFloppyConfig{
FATType: fat.FAT12,
Label: "packer",
OEMName: "packer",
}
if err := fat.FormatSuperFloppy(device, formatConfig); err != nil {
state["error"] = fmt.Errorf("Error creating floppy: %s", err)
return multistep.ActionHalt
}
// The actual FAT filesystem
log.Println("Initializing FAT filesystem on block device")
fatFs, err := fat.New(device)
if err != nil {
state["error"] = fmt.Errorf("Error creating floppy: %s", err)
return multistep.ActionHalt
}
// Get the root directory to the filesystem
log.Println("Reading the root directory from the filesystem")
rootDir, err := fatFs.RootDir()
if err != nil {
state["error"] = fmt.Errorf("Error creating floppy: %s", err)
return multistep.ActionHalt
}
// Go over each file and copy it.
for _, filename := range s.Files {
ui.Message(fmt.Sprintf("Copying: %s", filepath.Base(filename)))
if err := s.addSingleFile(rootDir, filename); err != nil {
state["error"] = fmt.Errorf("Error adding file to floppy: %s", err)
return multistep.ActionHalt
}
}
// Set the path to the floppy so it can be used later
state["floppy_path"] = s.floppyPath
return multistep.ActionContinue
}
func (s *StepCreateFloppy) Cleanup(map[string]interface{}) {
if s.floppyPath != "" {
log.Printf("Deleting floppy disk: %s", s.floppyPath)
os.Remove(s.floppyPath)
}
}
func (s *StepCreateFloppy) addSingleFile(dir fs.Directory, src string) error {
log.Printf("Adding file to floppy: %s", src)
inputF, err := os.Open(src)
if err != nil {
return err
}
defer inputF.Close()
entry, err := dir.AddFile(filepath.Base(src))
if err != nil {
return err
}
fatFile, err := entry.File()
if err != nil {
return err
}
if _, err := io.Copy(fatFile, inputF); err != nil {
return err
}
return nil
}


@ -25,29 +25,32 @@ type Builder struct {
}
type config struct {
BootCommand []string `mapstructure:"boot_command"`
BootWait time.Duration ``
DiskSize uint `mapstructure:"disk_size"`
FloppyFiles []string `mapstructure:"floppy_files"`
GuestAdditionsPath string `mapstructure:"guest_additions_path"`
GuestAdditionsURL string `mapstructure:"guest_additions_url"`
GuestAdditionsSHA256 string `mapstructure:"guest_additions_sha256"`
GuestOSType string `mapstructure:"guest_os_type"`
Headless bool `mapstructure:"headless"`
HTTPDir string `mapstructure:"http_directory"`
HTTPPortMin uint `mapstructure:"http_port_min"`
HTTPPortMax uint `mapstructure:"http_port_max"`
ISOMD5 string `mapstructure:"iso_md5"`
ISOUrl string `mapstructure:"iso_url"`
OutputDir string `mapstructure:"output_directory"`
ShutdownCommand string `mapstructure:"shutdown_command"`
ShutdownTimeout time.Duration ``
SSHHostPortMin uint `mapstructure:"ssh_host_port_min"`
SSHHostPortMax uint `mapstructure:"ssh_host_port_max"`
SSHPassword string `mapstructure:"ssh_password"`
SSHPort uint `mapstructure:"ssh_port"`
SSHUser string `mapstructure:"ssh_username"`
SSHWaitTimeout time.Duration ``
VBoxVersionFile string `mapstructure:"virtualbox_version_file"`
VBoxManage [][]string `mapstructure:"vboxmanage"`
VMName string `mapstructure:"vm_name"`
PackerBuildName string `mapstructure:"packer_build_name"`
PackerDebug bool `mapstructure:"packer_debug"`
@ -71,6 +74,10 @@ func (b *Builder) Prepare(raws ...interface{}) error {
b.config.DiskSize = 40000
}
if b.config.FloppyFiles == nil {
b.config.FloppyFiles = make([]string, 0)
}
if b.config.GuestAdditionsPath == "" {
b.config.GuestAdditionsPath = "VBoxGuestAdditions.iso"
}
@ -170,6 +177,47 @@ func (b *Builder) Prepare(raws ...interface{}) error {
}
}
if b.config.GuestAdditionsSHA256 != "" {
b.config.GuestAdditionsSHA256 = strings.ToLower(b.config.GuestAdditionsSHA256)
}
if b.config.GuestAdditionsURL != "" {
url, err := url.Parse(b.config.GuestAdditionsURL)
if err != nil {
errs = append(errs, fmt.Errorf("guest_additions_url is not a valid URL: %s", err))
} else {
if url.Scheme == "" {
url.Scheme = "file"
}
if url.Scheme == "file" {
if _, err := os.Stat(url.Path); err != nil {
errs = append(errs, fmt.Errorf("guest_additions_url points to bad file: %s", err))
}
} else {
supportedSchemes := []string{"file", "http", "https"}
scheme := strings.ToLower(url.Scheme)
found := false
for _, supported := range supportedSchemes {
if scheme == supported {
found = true
break
}
}
if !found {
errs = append(errs, fmt.Errorf("Unsupported URL scheme in guest_additions_url: %s", scheme))
}
}
}
if len(errs) == 0 {
// Put the URL back together since we may have modified it
b.config.GuestAdditionsURL = url.String()
}
}
if _, err := os.Stat(b.config.OutputDir); err == nil {
errs = append(errs, errors.New("Output directory already exists. It must not exist."))
}
@ -222,11 +270,15 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
new(stepDownloadGuestAdditions),
new(stepDownloadISO),
new(stepPrepareOutputDir),
&common.StepCreateFloppy{
Files: b.config.FloppyFiles,
},
new(stepHTTPServer),
new(stepSuppressMessages),
new(stepCreateVM),
new(stepCreateDisk),
new(stepAttachISO),
new(stepAttachFloppy),
new(stepForwardSSH),
new(stepVBoxManage),
new(stepRun),
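For the `guest_additions_url` validation in the `Prepare` code above, a bare path is treated as a `file` URL and only the `file`, `http`, and `https` schemes are accepted; the checksum is lowercased before comparison. A hedged sketch of the corresponding template keys, with an illustrative path and a placeholder checksum:

```json
{
  "builders": [{
    "type": "virtualbox",
    "guest_additions_url": "file:///downloads/VBoxGuestAdditions_4.2.16.iso",
    "guest_additions_sha256": "replace-with-the-real-sha256-of-the-iso"
  }]
}
```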


@ -115,11 +115,38 @@ func TestBuilderPrepare_DiskSize(t *testing.T) {
}
}
func TestBuilderPrepare_FloppyFiles(t *testing.T) {
var b Builder
config := testConfig()
delete(config, "floppy_files")
err := b.Prepare(config)
if err != nil {
t.Fatalf("bad err: %s", err)
}
if len(b.config.FloppyFiles) != 0 {
t.Fatalf("bad: %#v", b.config.FloppyFiles)
}
config["floppy_files"] = []string{"foo", "bar"}
b = Builder{}
err = b.Prepare(config)
if err != nil {
t.Fatalf("should not have error: %s", err)
}
expected := []string{"foo", "bar"}
if !reflect.DeepEqual(b.config.FloppyFiles, expected) {
t.Fatalf("bad: %#v", b.config.FloppyFiles)
}
}
func TestBuilderPrepare_GuestAdditionsPath(t *testing.T) {
var b Builder
config := testConfig()
delete(config, "guest_additions_path")
err := b.Prepare(config)
if err != nil {
t.Fatalf("bad err: %s", err)
@ -141,6 +168,81 @@ func TestBuilderPrepare_GuestAdditionsPath(t *testing.T) {
}
}
func TestBuilderPrepare_GuestAdditionsSHA256(t *testing.T) {
var b Builder
config := testConfig()
delete(config, "guest_additions_sha256")
err := b.Prepare(config)
if err != nil {
t.Fatalf("bad err: %s", err)
}
if b.config.GuestAdditionsSHA256 != "" {
t.Fatalf("bad: %s", b.config.GuestAdditionsSHA256)
}
config["guest_additions_sha256"] = "FOO"
b = Builder{}
err = b.Prepare(config)
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.GuestAdditionsSHA256 != "foo" {
t.Fatalf("bad size: %s", b.config.GuestAdditionsSHA256)
}
}
func TestBuilderPrepare_GuestAdditionsURL(t *testing.T) {
var b Builder
config := testConfig()
config["guest_additions_url"] = ""
err := b.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
if b.config.GuestAdditionsURL != "" {
t.Fatalf("should be empty: %s", b.config.GuestAdditionsURL)
}
config["guest_additions_url"] = "i/am/a/file/that/doesnt/exist"
err = b.Prepare(config)
if err == nil {
t.Error("should have error")
}
config["guest_additions_url"] = "file:i/am/a/file/that/doesnt/exist"
err = b.Prepare(config)
if err == nil {
t.Error("should have error")
}
config["guest_additions_url"] = "http://www.packer.io"
err = b.Prepare(config)
if err != nil {
t.Errorf("should not have error: %s", err)
}
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("error tempfile: %s", err)
}
defer os.Remove(tf.Name())
config["guest_additions_url"] = tf.Name()
err = b.Prepare(config)
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.GuestAdditionsURL != "file://"+tf.Name() {
t.Fatalf("guest_additions_url should be modified: %s", b.config.GuestAdditionsURL)
}
}
func TestBuilderPrepare_HTTPPort(t *testing.T) {
var b Builder
config := testConfig()


@ -59,6 +59,12 @@ func (d *VBox42Driver) IsRunning(name string) (bool, error) {
if line == `VMState="stopping"` { if line == `VMState="stopping"` {
return true, nil return true, nil
} }
// We consider "paused" to still be running. We wait for it to
// be completely stopped or some other state.
if line == `VMState="paused"` {
return true, nil
}
} }
return false, nil return false, nil


@ -0,0 +1,128 @@
package virtualbox
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"io"
"io/ioutil"
"log"
"os"
"path/filepath"
)
// This step attaches the floppy disk to the virtual machine.
//
// Uses:
//
// Produces:
type stepAttachFloppy struct {
floppyPath string
}
func (s *stepAttachFloppy) Run(state map[string]interface{}) multistep.StepAction {
// Determine if we even have a floppy disk to attach
var floppyPath string
if floppyPathRaw, ok := state["floppy_path"]; ok {
floppyPath = floppyPathRaw.(string)
} else {
log.Println("No floppy disk, not attaching.")
return multistep.ActionContinue
}
// VirtualBox is really dumb and can't figure out the format of the file
// without an extension, so we need to add the "vfd" extension to the
// floppy.
floppyPath, err := s.copyFloppy(floppyPath)
if err != nil {
state["error"] = fmt.Errorf("Error preparing floppy: %s", err)
return multistep.ActionHalt
}
driver := state["driver"].(Driver)
ui := state["ui"].(packer.Ui)
vmName := state["vmName"].(string)
ui.Say("Attaching floppy disk...")
// Create the floppy disk controller
command := []string{
"storagectl", vmName,
"--name", "Floppy Controller",
"--add", "floppy",
}
if err := driver.VBoxManage(command...); err != nil {
state["error"] = fmt.Errorf("Error creating floppy controller: %s", err)
return multistep.ActionHalt
}
// Attach the floppy to the controller
command = []string{
"storageattach", vmName,
"--storagectl", "Floppy Controller",
"--port", "0",
"--device", "0",
"--type", "fdd",
"--medium", floppyPath,
}
if err := driver.VBoxManage(command...); err != nil {
state["error"] = fmt.Errorf("Error attaching floppy: %s", err)
return multistep.ActionHalt
}
// Track the path so that we can unregister it from VirtualBox later
s.floppyPath = floppyPath
return multistep.ActionContinue
}
func (s *stepAttachFloppy) Cleanup(state map[string]interface{}) {
if s.floppyPath == "" {
return
}
// Delete the floppy disk
defer os.Remove(s.floppyPath)
driver := state["driver"].(Driver)
vmName := state["vmName"].(string)
command := []string{
"storageattach", vmName,
"--storagectl", "Floppy Controller",
"--port", "0",
"--device", "0",
"--medium", "none",
}
if err := driver.VBoxManage(command...); err != nil {
log.Printf("Error unregistering floppy: %s", err)
}
}
func (s *stepAttachFloppy) copyFloppy(path string) (string, error) {
tempdir, err := ioutil.TempDir("", "packer")
if err != nil {
return "", err
}
floppyPath := filepath.Join(tempdir, "floppy.vfd")
f, err := os.Create(floppyPath)
if err != nil {
return "", err
}
defer f.Close()
sourceF, err := os.Open(path)
if err != nil {
return "", err
}
defer sourceF.Close()
log.Printf("Copying floppy to temp location: %s", floppyPath)
if _, err := io.Copy(f, sourceF); err != nil {
return "", err
}
return floppyPath, nil
}


@ -22,7 +22,10 @@ func (s *stepCreateVM) Run(state map[string]interface{}) multistep.StepAction {
name := config.VMName
commands := make([][]string, 4)
commands[0] = []string{
"createvm", "--name", name,
"--ostype", config.GuestOSType, "--register",
}
commands[1] = []string{
"modifyvm", name,
"--boot1", "disk", "--boot2", "dvd", "--boot3", "none", "--boot4", "none",


@ -33,7 +33,9 @@ func (s *stepDownloadGuestAdditions) Run(state map[string]interface{}) multistep
cache := state["cache"].(packer.Cache) cache := state["cache"].(packer.Cache)
driver := state["driver"].(Driver) driver := state["driver"].(Driver)
ui := state["ui"].(packer.Ui) ui := state["ui"].(packer.Ui)
config := state["config"].(*config)
// Get VBox version
version, err := driver.Version() version, err := driver.Version()
if err != nil { if err != nil {
state["error"] = fmt.Errorf("Error reading version for guest additions download: %s", err) state["error"] = fmt.Errorf("Error reading version for guest additions download: %s", err)
@ -45,68 +47,18 @@ func (s *stepDownloadGuestAdditions) Run(state map[string]interface{}) multistep
version = newVersion
}
// First things first, we get the list of checksums for the files available
// for this version.
checksumsUrl := fmt.Sprintf("http://download.virtualbox.org/virtualbox/%s/SHA256SUMS", version)
checksumsFile, err := ioutil.TempFile("", "packer")
if err != nil {
state["error"] = fmt.Errorf(
"Failed creating temporary file to store guest addition checksums: %s",
err)
return multistep.ActionHalt
}
checksumsFile.Close()
defer os.Remove(checksumsFile.Name())
downloadConfig := &common.DownloadConfig{
Url: checksumsUrl,
TargetPath: checksumsFile.Name(),
Hash: nil,
}
log.Printf("Downloading guest addition checksums: %s", checksumsUrl)
download := common.NewDownloadClient(downloadConfig)
checksumsPath, action := s.progressDownload(download, state)
if action != multistep.ActionContinue {
return action
}
additionsName := fmt.Sprintf("VBoxGuestAdditions_%s.iso", version) additionsName := fmt.Sprintf("VBoxGuestAdditions_%s.iso", version)
// Next, we find the checksum for the file we're looking to download. // Use provided version or get it from virtualbox.org
// It is an error if the checksum cannot be found. var checksum string
checksumsF, err := os.Open(checksumsPath)
if err != nil {
state["error"] = fmt.Errorf("Error opening guest addition checksums: %s", err)
return multistep.ActionHalt
}
defer checksumsF.Close()
if config.GuestAdditionsSHA256 != "" {
checksum = config.GuestAdditionsSHA256
} else {
checksum, action = s.downloadAdditionsSHA256(state, version, additionsName)
if action != multistep.ActionContinue {
return action
checksum := ""
for _, line := range strings.Split(contents.String(), "\n") {
parts := strings.Fields(line)
log.Printf("Checksum file parts: %#v", parts)
if len(parts) != 2 {
// Bogus line
continue
} }
if strings.HasSuffix(parts[1], additionsName) {
checksum = parts[0]
log.Printf("Guest additions checksum: %s", checksum)
break
}
}
if checksum == "" {
state["error"] = fmt.Errorf("The checksum for the file '%s' could not be found.", additionsName)
return multistep.ActionHalt
} }
checksumBytes, err := hex.DecodeString(checksum)
@ -115,23 +67,29 @@ func (s *stepDownloadGuestAdditions) Run(state map[string]interface{}) multistep
return multistep.ActionHalt
}
// Use the provided source (URL or file path) or generate it
url := config.GuestAdditionsURL
if url == "" {
url = fmt.Sprintf(
"http://download.virtualbox.org/virtualbox/%s/%s",
version,
additionsName)
}
log.Printf("Guest additions URL: %s", url)
log.Printf("Acquiring lock to download the guest additions ISO.")
cachePath := cache.Lock(url)
defer cache.Unlock(url)
downloadConfig := &common.DownloadConfig{
Url: url,
TargetPath: cachePath,
Hash: sha256.New(),
Checksum: checksumBytes,
}
download := common.NewDownloadClient(downloadConfig)
ui.Say("Downloading VirtualBox guest additions. Progress will be shown periodically.")
state["guest_additions_path"], action = s.progressDownload(download, state)
return action
@ -179,3 +137,72 @@ DownloadWaitLoop:
return result, multistep.ActionContinue
}
func (s *stepDownloadGuestAdditions) downloadAdditionsSHA256(state map[string]interface{}, additionsVersion string, additionsName string) (string, multistep.StepAction) {
// First things first, we get the list of checksums for the files available
// for this version.
checksumsUrl := fmt.Sprintf("http://download.virtualbox.org/virtualbox/%s/SHA256SUMS", additionsVersion)
checksumsFile, err := ioutil.TempFile("", "packer")
if err != nil {
state["error"] = fmt.Errorf(
"Failed creating temporary file to store guest addition checksums: %s",
err)
return "", multistep.ActionHalt
}
defer os.Remove(checksumsFile.Name())
checksumsFile.Close()
downloadConfig := &common.DownloadConfig{
Url: checksumsUrl,
TargetPath: checksumsFile.Name(),
Hash: nil,
}
log.Printf("Downloading guest addition checksums: %s", checksumsUrl)
download := common.NewDownloadClient(downloadConfig)
checksumsPath, action := s.progressDownload(download, state)
if action != multistep.ActionContinue {
return "", action
}
// Next, we find the checksum for the file we're looking to download.
// It is an error if the checksum cannot be found.
checksumsF, err := os.Open(checksumsPath)
if err != nil {
state["error"] = fmt.Errorf("Error opening guest addition checksums: %s", err)
return "", multistep.ActionHalt
}
defer checksumsF.Close()
// We copy the contents of the file into memory. In general this file
// is quite small so that is okay. In the future, we probably want to
// use bufio and iterate line by line.
var contents bytes.Buffer
io.Copy(&contents, checksumsF)
checksum := ""
for _, line := range strings.Split(contents.String(), "\n") {
parts := strings.Fields(line)
log.Printf("Checksum file parts: %#v", parts)
if len(parts) != 2 {
// Bogus line
continue
}
if strings.HasSuffix(parts[1], additionsName) {
checksum = parts[0]
log.Printf("Guest additions checksum: %s", checksum)
break
}
}
if checksum == "" {
state["error"] = fmt.Errorf("The checksum for the file '%s' could not be found.", additionsName)
return "", multistep.ActionHalt
}
return checksum, multistep.ActionContinue
}


@ -7,8 +7,7 @@ import (
"path/filepath" "path/filepath"
) )
// This step creates the virtual disk that will be used as the // This step cleans up forwarded ports and exports the VM to an OVF.
// hard drive for the virtual machine.
// //
// Uses: // Uses:
// //
@ -22,9 +21,38 @@ func (s *stepExport) Run(state map[string]interface{}) multistep.StepAction {
ui := state["ui"].(packer.Ui) ui := state["ui"].(packer.Ui)
vmName := state["vmName"].(string) vmName := state["vmName"].(string)
// Clear out the Packer-created forwarding rule
ui.Say("Preparing to export machine...")
ui.Message(fmt.Sprintf("Deleting forwarded port mapping for SSH (host port %d)", state["sshHostPort"]))
command := []string{"modifyvm", vmName, "--natpf1", "delete", "packerssh"}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error deleting port forwarding rule: %s", err)
state["error"] = err
ui.Error(err.Error())
return multistep.ActionHalt
}
// Remove the attached floppy disk, if it exists
if _, ok := state["floppy_path"]; ok {
ui.Message("Removing floppy drive...")
command := []string{
"storageattach", vmName,
"--storagectl", "Floppy Controller",
"--port", "0",
"--device", "0",
"--medium", "none",
}
if err := driver.VBoxManage(command...); err != nil {
state["error"] = fmt.Errorf("Error removing floppy: %s", err)
return multistep.ActionHalt
}
}
// Export the VM to an OVF
outputPath := filepath.Join(config.OutputDir, "packer.ovf") outputPath := filepath.Join(config.OutputDir, "packer.ovf")
command := []string{ command = []string{
"export", "export",
vmName, vmName,
"--output", "--output",


@ -36,7 +36,7 @@ func (s *stepForwardSSH) Run(state map[string]interface{}) multistep.StepAction
}
}
// Create a forwarded port mapping to the VM
ui.Say(fmt.Sprintf("Creating forwarded port mapping for SSH (host port %d)", sshHostPort))
command := []string{
"modifyvm", vmName,


@ -27,7 +27,7 @@ func (s *stepRun) Run(state map[string]interface{}) multistep.StepAction {
if config.Headless == true {
ui.Message("WARNING: The VM will be started in headless mode, as configured.\n" +
"In headless mode, errors during the boot sequence or OS setup\n" +
"won't be easily visible. Use at your own discretion.")
guiArgument = "headless"
}
command := []string{"startvm", vmName, "--type", guiArgument}


@ -28,6 +28,7 @@ type Builder struct {
type config struct {
DiskName string `mapstructure:"vmdk_name"`
DiskSize uint `mapstructure:"disk_size"`
FloppyFiles []string `mapstructure:"floppy_files"`
GuestOSType string `mapstructure:"guest_os_type"`
ISOMD5 string `mapstructure:"iso_md5"`
ISOUrl string `mapstructure:"iso_url"`
@ -76,6 +77,10 @@ func (b *Builder) Prepare(raws ...interface{}) error {
b.config.DiskSize = 40000
}
if b.config.FloppyFiles == nil {
b.config.FloppyFiles = make([]string, 0)
}
if b.config.GuestOSType == "" {
b.config.GuestOSType = "other"
}
@ -230,6 +235,9 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&stepPrepareTools{},
&stepDownloadISO{},
&stepPrepareOutputDir{},
&common.StepCreateFloppy{
Files: b.config.FloppyFiles,
},
&stepCreateDisk{},
&stepCreateVMX{},
&stepHTTPServer{},
@ -241,6 +249,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
&stepProvision{},
&stepShutdown{},
&stepCleanFiles{},
&stepCleanVMX{},
&stepCompactDisk{},
} }


@ -4,6 +4,7 @@ import (
"github.com/mitchellh/packer/packer" "github.com/mitchellh/packer/packer"
"io/ioutil" "io/ioutil"
"os" "os"
"reflect"
"testing" "testing"
"time" "time"
) )
@ -107,6 +108,33 @@ func TestBuilderPrepare_DiskSize(t *testing.T) {
}
}
func TestBuilderPrepare_FloppyFiles(t *testing.T) {
var b Builder
config := testConfig()
delete(config, "floppy_files")
err := b.Prepare(config)
if err != nil {
t.Fatalf("bad err: %s", err)
}
if len(b.config.FloppyFiles) != 0 {
t.Fatalf("bad: %#v", b.config.FloppyFiles)
}
config["floppy_files"] = []string{"foo", "bar"}
b = Builder{}
err = b.Prepare(config)
if err != nil {
t.Fatalf("should not have error: %s", err)
}
expected := []string{"foo", "bar"}
if !reflect.DeepEqual(b.config.FloppyFiles, expected) {
t.Fatalf("bad: %#v", b.config.FloppyFiles)
}
}
func TestBuilderPrepare_HTTPPort(t *testing.T) {
var b Builder
config := testConfig()


@ -0,0 +1,72 @@
package vmware
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"io/ioutil"
"log"
"os"
"strings"
)
// This step cleans up the VMX by removing or changing this prior to
// being ready for use.
//
// Uses:
// ui packer.Ui
// vmx_path string
//
// Produces:
// <nothing>
type stepCleanVMX struct{}
func (s stepCleanVMX) Run(state map[string]interface{}) multistep.StepAction {
if _, ok := state["floppy_path"]; !ok {
return multistep.ActionContinue
}
ui := state["ui"].(packer.Ui)
vmxPath := state["vmx_path"].(string)
vmxData, err := s.readVMX(vmxPath)
if err != nil {
state["error"] = fmt.Errorf("Error reading VMX: %s", err)
return multistep.ActionHalt
}
// Delete the floppy0 entries so the floppy is no longer mounted
ui.Say("Unmounting floppy from VMX...")
for k, _ := range vmxData {
if strings.HasPrefix(k, "floppy0.") {
log.Printf("Deleting key: %s", k)
delete(vmxData, k)
}
}
vmxData["floppy0.present"] = "FALSE"
// Rewrite the VMX
if err := WriteVMX(vmxPath, vmxData); err != nil {
state["error"] = fmt.Errorf("Error writing VMX: %s", err)
return multistep.ActionHalt
}
return multistep.ActionContinue
}
func (stepCleanVMX) Cleanup(map[string]interface{}) {}
func (stepCleanVMX) readVMX(vmxPath string) (map[string]string, error) {
vmxF, err := os.Open(vmxPath)
if err != nil {
return nil, err
}
defer vmxF.Close()
vmxBytes, err := ioutil.ReadAll(vmxF)
if err != nil {
return nil, err
}
return ParseVMX(string(vmxBytes)), nil
}


@ -36,10 +36,10 @@ func (stepCreateVMX) Run(state map[string]interface{}) multistep.StepAction {
ui.Say("Building and writing VMX file") ui.Say("Building and writing VMX file")
tplData := &vmxTemplateData{ tplData := &vmxTemplateData{
config.VMName, Name: config.VMName,
config.GuestOSType, GuestOS: config.GuestOSType,
config.DiskName, DiskName: config.DiskName,
isoPath, ISOPath: isoPath,
} }
var buf bytes.Buffer var buf bytes.Buffer
@ -55,6 +55,13 @@ func (stepCreateVMX) Run(state map[string]interface{}) multistep.StepAction {
}
}
if floppyPathRaw, ok := state["floppy_path"]; ok {
log.Println("Floppy path present, setting in VMX")
vmxData["floppy0.present"] = "TRUE"
vmxData["floppy0.fileType"] = "file"
vmxData["floppy0.fileName"] = floppyPathRaw.(string)
}
vmxPath := filepath.Join(config.OutputDir, config.VMName+".vmx") vmxPath := filepath.Join(config.OutputDir, config.VMName+".vmx")
if err := WriteVMX(vmxPath, vmxData); err != nil { if err := WriteVMX(vmxPath, vmxData); err != nil {
err := fmt.Errorf("Error creating VMX file: %s", err) err := fmt.Errorf("Error creating VMX file: %s", err)


@ -3,6 +3,7 @@ package ssh
import (
"bytes"
"code.google.com/p/go.crypto/ssh"
"errors"
"fmt"
"github.com/mitchellh/packer/packer"
"io"
@ -145,6 +146,14 @@ func (c *comm) Upload(path string, input io.Reader) error {
// Otherwise, we have an ExitError, meaning we can just read
// the exit status
log.Printf("non-zero exit status: %d", exitErr.ExitStatus())
// If we exited with status 127, it means SCP isn't available.
// Return a more descriptive error for that.
if exitErr.ExitStatus() == 127 {
return errors.New(
"SCP failed to start. This usually means that SCP is not\n" +
"properly installed on the remote system.")
}
}
return err


@ -45,4 +45,3 @@ func TestPasswordKeybardInteractive_Challenge(t *testing.T) {
t.Fatalf("invalid password: %#v", result) t.Fatalf("invalid password: %#v", result)
} }
} }


@ -35,6 +35,7 @@ const defaultConfig = `
},
"provisioners": {
"file": "packer-provisioner-file",
"shell": "packer-provisioner-shell",
"chef-solo": "packer-provisioner-chef-solo"
}


@ -27,7 +27,9 @@ func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
}
log.Printf(
"Packer Version: %s %s %s",
packer.Version, packer.VersionPrerelease, packer.GitCommit)
log.Printf("Packer Target OS/Arch: %s %s", runtime.GOOS, runtime.GOARCH)
config, err := loadConfig()
@ -100,7 +102,7 @@ func loadConfig() (*config, error) {
mustExist = false
if err != nil {
log.Printf("Error detecting default config file path: %s", err)
}
}


@ -153,12 +153,14 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
var cmd packer.RemoteCmd
cmd.Command = args.Command
toClose := make([]net.Conn, 0)
if args.StdinAddress != "" {
stdinC, err := net.Dial("tcp", args.StdinAddress)
if err != nil {
return err
}
toClose = append(toClose, stdinC)
cmd.Stdin = stdinC
}
@ -168,6 +170,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
return err
}
toClose = append(toClose, stdoutC)
cmd.Stdout = stdoutC
}
@ -177,6 +180,7 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
return err
}
toClose = append(toClose, stderrC)
cmd.Stderr = stderrC
}
@ -196,6 +200,9 @@ func (c *CommunicatorServer) Start(args *CommunicatorStartArgs, reply *interface
// exit. When it does, report it back to caller...
go func() {
defer responseC.Close()
for _, conn := range toClose {
defer conn.Close()
}
for !cmd.Exited {
time.Sleep(50 * time.Millisecond)


@ -10,6 +10,7 @@ import (
"os/signal" "os/signal"
"strings" "strings"
"sync" "sync"
"unicode"
) )
type UiColor uint type UiColor uint
@ -110,7 +111,7 @@ func (u *PrefixedUi) prefixLines(prefix, message string) string {
result.WriteString(fmt.Sprintf("%s: %s\n", prefix, line)) result.WriteString(fmt.Sprintf("%s: %s\n", prefix, line))
} }
return strings.TrimSpace(result.String()) return strings.TrimRightFunc(result.String(), unicode.IsSpace)
} }
func (rw *ReaderWriterUi) Ask(query string) (string, error) { func (rw *ReaderWriterUi) Ask(query string) (string, error) {


@ -5,8 +5,12 @@ import (
"fmt" "fmt"
) )
// The git commit that is being compiled. This will be filled in by the
// compiler for source builds.
var GitCommit string
// The version of packer. // The version of packer.
const Version = "0.1.5" const Version = "0.1.6"
// Any pre-release marker for the version. If this is "" (empty string), // Any pre-release marker for the version. If this is "" (empty string),
// then it means that it is a final release. Otherwise, this is the // then it means that it is a final release. Otherwise, this is the
@ -27,6 +31,10 @@ func (versionCommand) Run(env Environment, args []string) int {
fmt.Fprintf(&versionString, "Packer v%s", Version) fmt.Fprintf(&versionString, "Packer v%s", Version)
if VersionPrerelease != "" { if VersionPrerelease != "" {
fmt.Fprintf(&versionString, ".%s", VersionPrerelease) fmt.Fprintf(&versionString, ".%s", VersionPrerelease)
if GitCommit != "" {
fmt.Fprintf(&versionString, " (%s)", GitCommit)
}
} }
env.Ui().Say(versionString.String()) env.Ui().Say(versionString.String())


@ -1,8 +1,8 @@
package main
import (
"github.com/jvandyke/packer/provisioner/chef-solo"
"github.com/mitchellh/packer/packer/plugin"
"../../provisioner/chef-solo"
)
func main() {


@ -0,0 +1,10 @@
package main
import (
"github.com/mitchellh/packer/packer/plugin"
"github.com/mitchellh/packer/provisioner/file"
)
func main() {
plugin.ServeProvisioner(new(file.Provisioner))
}


@ -5,6 +5,7 @@ import (
"github.com/mitchellh/mapstructure" "github.com/mitchellh/mapstructure"
"github.com/mitchellh/packer/packer" "github.com/mitchellh/packer/packer"
"io/ioutil" "io/ioutil"
"log"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
@ -75,14 +76,17 @@ func (p *AWSBoxPostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact
vagrantfileContents := defaultAWSVagrantfile
if p.config.VagrantfileTemplate != "" {
log.Printf("Using vagrantfile template: %s", p.config.VagrantfileTemplate)
f, err := os.Open(p.config.VagrantfileTemplate)
if err != nil {
err = fmt.Errorf("error opening vagrantfile template: %s", err)
return nil, false, err
}
defer f.Close()
contents, err := ioutil.ReadAll(f)
if err != nil {
err = fmt.Errorf("error reading vagrantfile template: %s", err)
return nil, false, err
}
@ -101,6 +105,7 @@ func (p *AWSBoxPostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact
// Compress the directory to the given output path
if err := DirToBox(outputPath, dir); err != nil {
err = fmt.Errorf("error creating box: %s", err)
return nil, false, err
}


@ -24,11 +24,15 @@ type Config struct {
}
type PostProcessor struct {
config Config
premade map[string]packer.PostProcessor
rawConfigs []interface{}
}
func (p *PostProcessor) Configure(raws ...interface{}) error {
// Store the raw configs for usage later
p.rawConfigs = raws
for _, raw := range raws {
err := mapstructure.Decode(raw, &p.config)
if err != nil {
@ -36,8 +40,10 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
}
}
ppExtraConfig := make(map[string]interface{})
if p.config.OutputPath == "" {
p.config.OutputPath = "packer_{{ .BuildName }}_{{.Provider}}.box"
ppExtraConfig["output"] = p.config.OutputPath
}
_, err := template.New("output").Parse(p.config.OutputPath)
@ -45,16 +51,15 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
return fmt.Errorf("output invalid template: %s", err) return fmt.Errorf("output invalid template: %s", err)
} }
// Store the extra configuration for post-processors
p.rawConfigs = append(p.rawConfigs, ppExtraConfig)
// TODO(mitchellh): Properly handle multiple raw configs // TODO(mitchellh): Properly handle multiple raw configs
var mapConfig map[string]interface{} var mapConfig map[string]interface{}
if err := mapstructure.Decode(raws[0], &mapConfig); err != nil { if err := mapstructure.Decode(raws[0], &mapConfig); err != nil {
return err return err
} }
packerConfig := map[string]interface{}{
packer.BuildNameConfigKey: p.config.PackerBuildName,
}
p.premade = make(map[string]packer.PostProcessor)
errors := make([]error, 0)
for k, raw := range mapConfig {
@ -63,7 +68,12 @@ func (p *PostProcessor) Configure(raws ...interface{}) error {
continue
}
// Create the proper list of configurations
ppConfigs := make([]interface{}, 0, len(p.rawConfigs)+1)
copy(ppConfigs, p.rawConfigs)
ppConfigs = append(ppConfigs, raw)
if err := pp.Configure(ppConfigs...); err != nil {
errors = append(errors, err)
}
@ -93,8 +103,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac
return nil, false, fmt.Errorf("Vagrant box post-processor not found: %s", ppName) return nil, false, fmt.Errorf("Vagrant box post-processor not found: %s", ppName)
} }
config := map[string]string{"output": p.config.OutputPath} if err := pp.Configure(p.rawConfigs...); err != nil {
if err := pp.Configure(config); err != nil {
return nil, false, err
}
}
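Taken together, the changes above forward the top-level vagrant post-processor settings (including the default `output` template) to the provider-specific sub-post-processors, which is what makes keys like `vagrantfile_template` in a provider override take effect ([GH-129]). A hedged sketch, assuming the provider-keyed override form this post-processor decodes; the template path is illustrative:

```json
{
  "post-processors": [{
    "type": "vagrant",
    "output": "packer_{{.BuildName}}_{{.Provider}}.box",
    "aws": {
      "vagrantfile_template": "templates/aws_vagrantfile.tpl"
    }
  }]
}
```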


@ -21,11 +21,32 @@ type OutputPathTemplate struct {
Provider string
}
// Copies a file by copying the contents of the file to another place.
func CopyContents(dst, src string) error {
srcF, err := os.Open(src)
if err != nil {
return err
}
defer srcF.Close()
dstF, err := os.Create(dst)
if err != nil {
return err
}
defer dstF.Close()
if _, err := io.Copy(dstF, srcF); err != nil {
return err
}
return nil
}
// DirToBox takes the directory and compresses it into a Vagrant-compatible
// box. This function does not perform checks to verify that dir is
// actually a proper box. This is an expected precondition.
func DirToBox(dst, dir string) error {
log.Printf("Turning dir into box: %s => %s", dir, dst)
dstF, err := os.Create(dst)
if err != nil {
return err
@ -47,7 +68,7 @@ func DirToBox(dst, dir string) error {
// Skip directories
if info.IsDir() {
log.Printf("Skipping directory '%s' for box '%s'", path, dst)
return nil
}


@ -5,7 +5,6 @@ import (
"fmt" "fmt"
"github.com/mitchellh/mapstructure" "github.com/mitchellh/mapstructure"
"github.com/mitchellh/packer/packer" "github.com/mitchellh/packer/packer"
"io"
"io/ioutil" "io/ioutil"
"log" "log"
"os" "os"
@ -66,19 +65,9 @@ func (p *VBoxBoxPostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifac
// Copy all of the original contents into the temporary directory
for _, path := range artifact.Files() {
ui.Message(fmt.Sprintf("Copying: %s", path))
src, err := os.Open(path)
if err != nil {
return nil, false, err
}
defer src.Close()
dstPath := filepath.Join(dir, filepath.Base(path))
if err := CopyContents(dstPath, path); err != nil {
return nil, false, err
}
defer dst.Close()
if _, err := io.Copy(dst, src); err != nil {
return nil, false, err
} }
} }


@ -4,7 +4,6 @@ import (
"fmt" "fmt"
"github.com/mitchellh/mapstructure" "github.com/mitchellh/mapstructure"
"github.com/mitchellh/packer/packer" "github.com/mitchellh/packer/packer"
"io"
"io/ioutil" "io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
@ -51,19 +50,9 @@ func (p *VMwareBoxPostProcessor) PostProcess(ui packer.Ui, artifact packer.Artif
// Copy all of the original contents into the temporary directory
for _, path := range artifact.Files() {
ui.Message(fmt.Sprintf("Copying: %s", path))
src, err := os.Open(path)
if err != nil {
return nil, false, err
}
defer src.Close()
dstPath := filepath.Join(dir, filepath.Base(path))
if err := CopyContents(dstPath, path); err != nil {
return nil, false, err
}
defer dst.Close()
if _, err := io.Copy(dst, src); err != nil {
return nil, false, err
} }
} }


@ -0,0 +1,57 @@
package file
import (
"errors"
"fmt"
"github.com/mitchellh/mapstructure"
"github.com/mitchellh/packer/packer"
"os"
)
type config struct {
// The local path of the file to upload.
Source string
// The remote path where the local file will be uploaded to.
Destination string
}
type Provisioner struct {
config config
}
func (p *Provisioner) Prepare(raws ...interface{}) error {
for _, raw := range raws {
if err := mapstructure.Decode(raw, &p.config); err != nil {
return err
}
}
errs := []error{}
if _, err := os.Stat(p.config.Source); err != nil {
errs = append(errs,
fmt.Errorf("Bad source '%s': %s", p.config.Source, err))
}
if p.config.Destination == "" {
errs = append(errs, errors.New("Destination must be specified."))
}
if len(errs) > 0 {
return &packer.MultiError{errs}
}
return nil
}
func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
ui.Say(fmt.Sprintf("Uploading %s => %s", p.config.Source, p.config.Destination))
f, err := os.Open(p.config.Source)
if err != nil {
return err
}
defer f.Close()
return comm.Upload(p.config.Destination, f)
}
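A minimal sketch of how this provisioner could appear in a template; the paths are illustrative. `Prepare` above rejects a missing source file or an empty destination:

```json
{
  "provisioners": [{
    "type": "file",
    "source": "app.tar.gz",
    "destination": "/tmp/app.tar.gz"
  }]
}
```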


@ -0,0 +1,147 @@
package file
import (
"github.com/mitchellh/packer/packer"
"io"
"io/ioutil"
"os"
"strings"
"testing"
)
func testConfig() map[string]interface{} {
return map[string]interface{}{
"destination": "something",
}
}
func TestProvisioner_Impl(t *testing.T) {
var raw interface{}
raw = &Provisioner{}
if _, ok := raw.(packer.Provisioner); !ok {
t.Fatalf("must be a provisioner")
}
}
func TestProvisionerPrepare_InvalidSource(t *testing.T) {
var p Provisioner
config := testConfig()
config["source"] = "/this/should/not/exist"
err := p.Prepare(config)
if err == nil {
t.Fatalf("should require existing file")
}
}
func TestProvisionerPrepare_ValidSource(t *testing.T) {
var p Provisioner
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("error tempfile: %s", err)
}
defer os.Remove(tf.Name())
config := testConfig()
config["source"] = tf.Name()
err = p.Prepare(config)
if err != nil {
t.Fatalf("should allow valid file: %s", err)
}
}
func TestProvisionerPrepare_EmptyDestination(t *testing.T) {
var p Provisioner
config := testConfig()
delete(config, "destination")
err := p.Prepare(config)
if err == nil {
t.Fatalf("should require destination path")
}
}
type stubUploadCommunicator struct {
dest string
data []byte
}
func (suc *stubUploadCommunicator) Download(src string, data io.Writer) error {
return nil
}
func (suc *stubUploadCommunicator) Upload(dest string, data io.Reader) error {
var err error
suc.dest = dest
suc.data, err = ioutil.ReadAll(data)
return err
}
func (suc *stubUploadCommunicator) Start(cmd *packer.RemoteCmd) error {
return nil
}
type stubUi struct {
sayMessages string
}
func (su *stubUi) Ask(string) (string, error) {
return "", nil
}
func (su *stubUi) Error(string) {
}
func (su *stubUi) Message(string) {
}
func (su *stubUi) Say(msg string) {
su.sayMessages += msg
}
func TestProvisionerProvision_SendsFile(t *testing.T) {
var p Provisioner
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("error tempfile: %s", err)
}
defer os.Remove(tf.Name())
if _, err = tf.Write([]byte("hello")); err != nil {
t.Fatalf("error writing tempfile: %s", err)
}
config := map[string]interface{}{
"source": tf.Name(),
"destination": "something",
}
if err := p.Prepare(config); err != nil {
t.Fatalf("err: %s", err)
}
ui := &stubUi{}
comm := &stubUploadCommunicator{}
err = p.Provision(ui, comm)
if err != nil {
t.Fatalf("should successfully provision: %s", err)
}
if !strings.Contains(ui.sayMessages, tf.Name()) {
t.Fatalf("should print source filename")
}
if !strings.Contains(ui.sayMessages, "something") {
t.Fatalf("should print destination filename")
}
if comm.dest != "something" {
t.Fatalf("should upload to configured destination")
}
if string(comm.data) != "hello" {
t.Fatalf("should upload with source file's data")
}
}

View File

@@ -25,6 +25,9 @@ type config struct {
 	// in the context of a single shell.
 	Inline []string
+	// The shebang value used when running inline scripts.
+	InlineShebang string `mapstructure:"inline_shebang"`
 	// The local path of the shell script to upload and execute.
 	Script string
@@ -62,13 +65,17 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
 	}
 	if p.config.ExecuteCommand == "" {
-		p.config.ExecuteCommand = "{{.Vars}} sh {{.Path}}"
+		p.config.ExecuteCommand = "chmod +x {{.Path}}; {{.Vars}} {{.Path}}"
 	}
 	if p.config.Inline != nil && len(p.config.Inline) == 0 {
 		p.config.Inline = nil
 	}
+	if p.config.InlineShebang == "" {
+		p.config.InlineShebang = "/bin/sh"
+	}
 	if p.config.RemotePath == "" {
 		p.config.RemotePath = DefaultRemotePath
 	}
@@ -136,6 +143,7 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
 	// Write our contents to it
 	writer := bufio.NewWriter(tf)
+	writer.WriteString(fmt.Sprintf("#!%s\n", p.config.InlineShebang))
 	for _, command := range p.config.Inline {
 		if _, err := writer.WriteString(command + "\n"); err != nil {
 			return fmt.Errorf("Error preparing shell script: %s", err)
@@ -157,6 +165,7 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
 	if err != nil {
 		return fmt.Errorf("Error opening shell script: %s", err)
 	}
+	defer f.Close()
 	log.Printf("Uploading %s => %s", path, p.config.RemotePath)
 	err = comm.Upload(p.config.RemotePath, f)
@@ -164,6 +173,9 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
 		return fmt.Errorf("Error uploading shell script: %s", err)
 	}
+	// Close the original file since we copied it
+	f.Close()
 	// Flatten the environment variables
 	flattendVars := strings.Join(p.config.Vars, " ")
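Taken together, these hunks mean an `inline` block is written to a temporary script that now begins with the configured shebang, is uploaded, and is executed through the new `chmod +x`-based default command. A sketch of how the raw configuration drives that path, written in the same style as the tests shown in the following hunk; the commands, the bash shebang, and the test name are illustrative, not part of the original change:

    package shell

    import "testing"

    // Sketch only: exercises the inline path at the Prepare level.
    func TestProvisionerPrepare_InlineSketch(t *testing.T) {
        config := map[string]interface{}{
            "inline":         []string{"apt-get update", "apt-get install -y curl"},
            "inline_shebang": "/bin/bash", // omit this key to get the /bin/sh default
        }

        p := new(Provisioner)
        if err := p.Prepare(config); err != nil {
            t.Fatalf("err: %s", err)
        }
        if p.config.InlineShebang != "/bin/bash" {
            t.Fatalf("bad shebang: %s", p.config.InlineShebang)
        }
        // Provision would now write "#!/bin/bash" and the two commands into a
        // temporary script, upload it, and run it via the new default
        // execute_command ("chmod +x {{.Path}}; {{.Vars}} {{.Path}}").
    }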

View File

@@ -35,6 +35,33 @@ func TestProvisionerPrepare_Defaults(t *testing.T) {
 	}
 }
+func TestProvisionerPrepare_InlineShebang(t *testing.T) {
+	config := testConfig()
+	delete(config, "inline_shebang")
+	p := new(Provisioner)
+	err := p.Prepare(config)
+	if err != nil {
+		t.Fatalf("should not have error: %s", err)
+	}
+	if p.config.InlineShebang != "/bin/sh" {
+		t.Fatalf("bad value: %s", p.config.InlineShebang)
+	}
+	// Test with a good one
+	config["inline_shebang"] = "foo"
+	p = new(Provisioner)
+	err = p.Prepare(config)
+	if err != nil {
+		t.Fatalf("should not have error: %s", err)
+	}
+	if p.config.InlineShebang != "foo" {
+		t.Fatalf("bad value: %s", p.config.InlineShebang)
+	}
+}
 func TestProvisionerPrepare_Script(t *testing.T) {
 	config := testConfig()
 	delete(config, "inline")

View File

@@ -1,4 +1,6 @@
 #!/bin/bash
+#
+# This script only builds the application from source.
 set -e
 NO_COLOR="\x1b[0m"
@@ -14,13 +16,23 @@ DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
 # Change into that directory
 cd $DIR
+# Get the git commit
+GIT_COMMIT=$(git rev-parse --short HEAD)
+GIT_DIRTY=$(test -n "`git status --porcelain`" && echo "+CHANGES" || true)
 # Compile the main Packer app
 echo -e "${OK_COLOR}--> Compiling Packer${NO_COLOR}"
-go build -v -o bin/packer .
+go build \
+    -ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
+    -v \
+    -o bin/packer .
 # Go over each plugin and build it
 for PLUGIN in $(find ./plugin -mindepth 1 -maxdepth 1 -type d); do
 	PLUGIN_NAME=$(basename ${PLUGIN})
 	echo -e "${OK_COLOR}--> Compiling Plugin: ${PLUGIN_NAME}${NO_COLOR}"
-	go build -v -o bin/packer-${PLUGIN_NAME} ${PLUGIN}
+	go build \
+	    -ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
+	    -v \
+	    -o bin/packer-${PLUGIN_NAME} ${PLUGIN}
 done
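The `-X` linker flag only has an effect if the named symbol exists, so the `packer` package presumably declares a package-level string for it, roughly as below; the file it lives in and the surrounding comment are assumptions, and only the fully-qualified name comes from the flag above:

    package packer

    // GitCommit is set at link time by the build script via
    //   -ldflags "-X github.com/mitchellh/packer/packer.GitCommit <sha>"
    // and remains empty for plain `go build` invocations.
    var GitCommit string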

View File

@@ -1,5 +1,7 @@
 source 'https://rubygems.org'
+ruby '1.9.3'
 gem "middleman", "~> 3.0.6"
 gem "middleman-minify-html", "~> 3.0.0"
 gem "rack-contrib", "~> 1.1.0"

View File

@@ -81,7 +81,7 @@ Here is a basic example. It is completely valid except for the access keys:
   "secret_key": "YOUR SECRET KEY HERE",
   "region": "us-east-1",
   "source_ami": "ami-de0d9eb7",
-  "instance_type": "m1.small",
+  "instance_type": "t1.micro",
   "ssh_username": "ubuntu",
   "ami_name": "packer-quick-start {{.CreateTime}}"
 }

View File

@@ -89,7 +89,7 @@ the prior linked page for information on syntax if you're unfamiliar with it.
 The available variables are shown below:
-* `CreateTime`- This will be replaced with the Unix timestamp of when the
+* `CreateTime` - This will be replaced with the Unix timestamp of when the
   image is created.
 ## Finding Image, Region, and Size IDs

View File

@@ -27,7 +27,8 @@ Ubuntu to self-install. Still, the example serves to show the basic configuratio
   "iso_url": "http://releases.ubuntu.com/12.04/ubuntu-12.04.2-server-amd64.iso",
   "iso_md5": "af5f788aee1b32c4b2634734309cc9e9",
   "ssh_username": "packer",
-  "ssh_wait_timeout": "30s"
+  "ssh_wait_timeout": "30s",
+  "shutdown_command": "shutdown -P now"
 }
 </pre>
@@ -69,12 +70,29 @@ Optional:
 * `disk_size` (int) - The size, in megabytes, of the hard disk to create
   for the VM. By default, this is 40000 (40 GB).
+* `floppy_files` (array of strings) - A list of files to put onto a floppy
+  disk that is attached when the VM is booted for the first time. This is
+  most useful for unattended Windows installs, which look for an
+  `Autounattend.xml` file on removable media. By default no floppy will
+  be attached. The files listed in this configuration will all be put
+  into the root directory of the floppy disk; sub-directories are not supported.
 * `guest_additions_path` (string) - The path on the guest virtual machine
   where the VirtualBox guest additions ISO will be uploaded. By default this
   is "VBoxGuestAdditions.iso" which should upload into the login directory
   of the user. This is a [configuration template](/docs/templates/configuration-templates.html)
   where the `Version` variable is replaced with the VirtualBox version.
+* `guest_additions_sha256` (string) - The SHA256 checksum of the guest
+  additions ISO that will be uploaded to the guest VM. By default the
+  checksums will be downloaded from the VirtualBox website, so this only
+  needs to be set if you want to be explicit about the checksum.
+* `guest_additions_url` (string) - The URL to the guest additions ISO
+  to upload. This can also be a file URL if the ISO is at a local path.
+  By default the VirtualBox builder will go and download the proper
+  guest additions ISO from the internet.
 * `guest_os_type` (string) - The guest OS type being installed. By default
   this is "other", but you can get _dramatic_ performance improvements by
   setting this to the proper value. To view all available values for this
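Decoded from the template JSON, these new options reach the builder as a plain list of strings and two strings. A small sketch of that shape; the file names, URL, and checksum are placeholders, not values from this change:

    // Hypothetical builder section after JSON parsing; values are placeholders.
    var example = map[string]interface{}{
        // Flat list only: every file ends up in the root of the floppy image.
        "floppy_files": []string{"Autounattend.xml", "install-winrm.ps1"},
        // May be a local file:// URL when the ISO is already on disk.
        "guest_additions_url":    "file:///isos/VBoxGuestAdditions.iso",
        "guest_additions_sha256": "<sha256 of the ISO above>",
    }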

View File

@@ -28,7 +28,8 @@ Ubuntu to self-install. Still, the example serves to show the basic configuratio
   "iso_url": "http://releases.ubuntu.com/12.04/ubuntu-12.04.2-server-amd64.iso",
   "iso_md5": "af5f788aee1b32c4b2634734309cc9e9",
   "ssh_username": "packer",
-  "ssh_wait_timeout": "30s"
+  "ssh_wait_timeout": "30s",
+  "shutdown_command": "shutdown -P now"
 }
 </pre>
@@ -72,6 +73,13 @@ Optional:
   actual file representing the disk will not use the full size unless it is full.
   By default this is set to 40,000 (40 GB).
+* `floppy_files` (array of strings) - A list of files to put onto a floppy
+  disk that is attached when the VM is booted for the first time. This is
+  most useful for unattended Windows installs, which look for an
+  `Autounattend.xml` file on removable media. By default no floppy will
+  be attached. The files listed in this configuration will all be put
+  into the root directory of the floppy disk; sub-directories are not supported.
 * `guest_os_type` (string) - The guest OS type being installed. This will be
   set in the VMware VMX. By default this is "other". By specifying a more specific
   OS type, VMware may perform some optimizations or virtual hardware changes

View File

@@ -14,7 +14,7 @@ artifacts that are created will be outputted at the end of the build.
 * `-debug` - Disables parallelization and enables debug mode. Debug mode flags
   the builders that they should output debugging information. The exact behavior
   of debug mode is left to the builder. In general, builders usually will stop
-  between each step, waiting keyboard input before continuing. This will allow
+  between each step, waiting for keyboard input before continuing. This will allow
   the user to inspect state and so on.
 * `-except=foo,bar,baz` - Builds all the builds except those with the given

View File

@@ -8,7 +8,7 @@ Plugins allow new functionality to be added to Packer without
 modifying the core source code. Packer plugins are able to add new
 commands, builders, provisioners, hooks, and more. In fact, much of Packer
 itself is implemented by writing plugins that are simply distributed with
-the Packer. For example, all the commands, builders, provisioners, and more
+Packer. For example, all the commands, builders, provisioners, and more
 that ship with Packer are implemented as Plugins that are simply hardcoded
 to load with Packer.

View File

@@ -0,0 +1,34 @@
---
layout: "docs"
---
# File Provisioner
Type: `file`
The file provisioner uploads files to machines built by Packer. The
recommended usage of the file provisioner is to use it to upload files,
and then use [shell provisioner](/docs/provisioners/shell.html) to move
them to the proper place, set permissions, etc.
## Basic Example
<pre class="prettyprint">
{
"type": "file",
"source": "app.tar.gz",
"destination": "/tmp/app.tar.gz"
}
</pre>
## Configuration Reference
The available configuration options are listed below. All elements are required.
* `source` (string) - The path to a local file to upload to the machine. The
path can be absolute or relative. If it is relative, it is relative to the
working directory when Packer is executed.
* `destination` (string) - The path where the file will be uploaded to in the
machine. This value must be a writable location and any parent directories
must already exist.

View File

@@ -23,7 +23,7 @@ The example below is fully functional.
 ## Configuration Reference
-The reference of available configuratin options is listed below. The only
+The reference of available configuration options is listed below. The only
 required element is either "inline" or "script". Every other option is optional.
 Exactly _one_ of the following is required:
@@ -51,12 +51,17 @@ Optional parameters:
   `key=value`.
 * `execute_command` (string) - The command to use to execute the script.
-  By default this is `{{ .Vars }} sh {{ .Path }}`. The value of this is
+  By default this is `chmod +x {{ .Path }}; {{ .Vars }} {{ .Path }}`. The value of this is
   treated as [configuration template](/docs/templates/configuration-
   templates.html). There are two available variables: `Path`, which is
   the path to the script to run, and `Vars`, which is the list of
   `environment_vars`, if configured.
+* `inline_shebang` (string) - The
+  [shebang](http://en.wikipedia.org/wiki/Shebang_(Unix)) value to use when
+  running commands specified by `inline`. By default, this is `/bin/sh`.
+  If you're not using `inline`, then this configuration has no effect.
 * `remote_path` (string) - The path where the script will be uploaded to
   in the machine. This defaults to "/tmp/script.sh". This value must be
   a writable location and any parent directories must already exist.
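A common reason to override `execute_command` is to run the uploaded script under sudo while keeping the new chmod step. Shown here as the decoded configuration section; the sudo variant is an illustration, not something this change adds:

    // Hypothetical shell provisioner section with execute_command overridden.
    var sudoShell = map[string]interface{}{
        "script": "setup_things.sh",
        // Path is the uploaded script; Vars is the flattened environment_vars.
        "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E {{ .Path }}",
    }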

View File

@@ -33,8 +33,8 @@ Along with each key, it is noted whether it is required or not.
   information on how to define and configure a provisioner, read the
   sub-section on [configuring provisioners in templates](/docs/templates/provisioners.html).
-* `post-processors` (optional) is an array of that defines the various
-  post-processing steps to take with the built images. This is an optional
+* `post-processors` (optional) is an array of one or more objects that defines the
+  various post-processing steps to take with the built images. This is an optional
   field. If not specified, then no post-processing will be done. For more
   information on what post-processors do and how they're defined, read the
   sub-section on [configuring post-processors in templates](/docs/templates/post-processors.html).
@@ -54,6 +54,7 @@ missing valid AWS access keys. Otherwise, it would work properly with
       "secret_key": "...",
       "region": "us-east-1",
       "source_ami": "ami-de0d9eb7",
+      "instance_type": "t1.micro",
       "ssh_username": "ubuntu",
       "ami_name": "packer {{.CreateTime}}"
     }
@@ -62,7 +63,7 @@ missing valid AWS access keys. Otherwise, it would work properly with
   "provisioners": [
     {
       "type": "shell",
-      "path": "setup_things.sh"
+      "script": "setup_things.sh"
     }
   ]
 }

View File

@@ -72,7 +72,7 @@ A **sequence definition** is a JSON array comprised of other **simple** or
 **detailed** definitions. The post-processors defined in the array are run
 in order, with the artifact of each feeding into the next, and any intermediary
 artifacts being discarded. A sequence definition may not contain another
-sequence definition. Sequnce definitions are used to chain together multiple
+sequence definition. Sequence definitions are used to chain together multiple
 post-processors. An example is shown below, where the artifact of a build is
 compressed then uploaded, but the compressed result is not kept.
@@ -93,7 +93,7 @@ are simply shortcuts for a **sequence** definition of only one element.
 ## Input Artifacts
 When using post-processors, the input artifact (coming from a builder or
-another post-proccessor) is discarded by default after the post-processor runs.
+another post-processor) is discarded by default after the post-processor runs.
 This is because generally, you don't want the intermediary artifacts on the
 way to the final artifact created.

View File

@@ -39,7 +39,7 @@ the `type` key. This key specifies the name of the provisioner to use.
 Additional keys within the object are used to configure the provisioner,
 with the exception of a handful of special keys, covered later.
-As an example, the "shell" provisioner requires at least the `script` key,
+As an example, the "shell" provisioner requires a key such as `script`
 which specifies a path to a shell script to execute within the machines
 being created.

View File

@@ -36,6 +36,7 @@
       <ul>
         <li><h4>Provisioners</h4></li>
         <li><a href="/docs/provisioners/shell.html">Shell Scripts</a></li>
+        <li><a href="/docs/provisioners/file.html">File Uploads</a></li>
         <li><a href="/docs/provisioners/custom.html">Custom</a></li>
       </ul>