Merge branch 'master' into f-vtolstov-compress

This commit is contained in:
Chris Bednarski 2015-06-12 14:20:07 -07:00
commit 68fec5012e
75 changed files with 1268 additions and 1932 deletions

View File

@ -1,5 +1,14 @@
## 0.8.0 (unreleased)
BACKWARDS INCOMPATIBILITIES:
* builder/digitalocean: no longer supports the v1 API which has been
deprecated for some time. Most configurations should continue to
work as long as you use the `api_token` field for auth.
* builder/digitalocean: `image`, `region`, and `size` are now required.
* builder/openstack: auth parameters have been changed to better
reflect OS terminology. Existing environment variables still work.
FEATURES:
* **New config function: `template_dir`**: The directory to the template
@ -8,11 +17,19 @@ FEATURES:
IMPROVEMENTS:
* core: Interrupt handling for SIGTERM signal as well. [GH-1858]
* builder/digitalocean: Save SSH key to pwd if debug mode is on. [GH-1829]
* builder/digitalocean: User data support [GH-2113]
* builder/parallels: Support Parallels Desktop 11 [GH-2199]
* builder/openstack: Add `rackconnect_wait` for Rackspace customers to wait for
RackConnect data to appear
* builder/openstack: Add `ssh_interface` option for rackconnect for users that
have prohibitive firewalls
* builder/openstack: Flavor names can be used as well as refs
* builder/openstack: Add `availability_zone` [GH-2016]
* builder/virtualbox: Added option: `ssh_skip_nat_mapping` to skip the
automatic port forward for SSH and to use the guest port directly. [GH-1078]
* builder/virtualbox: Added SCSI support
* builder/vmware: Support for additional disks [GH-1382]
* command/push: Add `-name` flag for specifying name from CLI [GH-2042]
* command/push: Push configuration in templates supports variables [GH-1861]
* post-processor/docker-save: Can be chained [GH-2179]
@ -22,7 +39,10 @@ IMPROVEMENTS:
BUG FIXES:
* core: Fix potential panic for post-processor plugin exits [GH-2098]
* builder/amazon: Allow spaces in AMI names when using `clean_ami_name` [GH-2182]
* builder/amazon: Remove deprecated ec2-upload-bundle parameter [GH-1931]
* builder/amazon: Use IAM Profile to upload bundle if provided [GH-1985]
* builder/amazon: Use correct exit code after SSH authentication failed [GH-2004]
* builder/amazon: Retry finding created instance for eventual
consistency. [GH-2129]
* builder/amazon: If no AZ is specified, use AZ chosen automatically by
@ -31,8 +51,13 @@ BUG FIXES:
is deleted on cleanup. [GH-1801]
* builder/amazon: AMI copy won't copy to the source region [GH-2123]
* builder/amazon: Validate AMI doesn't exist with name prior to build [GH-1774]
* builder/amazon: Improved retry logic around waiting for instances. [GH-1764]
* builder/amazon: Fix issues with creating Block Devices. [GH-2195]
* builder/amazon/chroot: Retry waiting for disk attachments [GH-2046]
* builder/amazon/instance: Use `-i` in sudo commands so PATH is inherited. [GH-1930]
* builder/amazon/instance: Use `--region` flag for bundle upload command. [GH-1931]
* builder/digitalocean: Wait for droplet to unlock before changing state,
should lower the "pending event" errors.
* builder/digitalocean: Ignore invalid fields from the ever-changing v2 API
* builder/digitalocean: Private images can be used as a source [GH-1792]
* builder/docker: Fixed hang on prompt while copying script
@ -46,12 +71,15 @@ BUG FIXES:
OS installers. [GH-1709]
* builder/virtualbox: Remove the floppy controller in addition to the
floppy disk. [GH-1879]
* builder/virtualbox: Fixed regression where downloading ISO without a
".iso" extension didn't work. [GH-1839]
* builder/vmware: Add 100ms delay between keystrokes to avoid subtle
timing issues in most cases. [GH-1663]
* builder/vmware: Bind HTTP server to IPv4, which is more compatible with
OS installers. [GH-1709]
* builder/vmware: Case-insensitive match of MAC address to find IP [GH-1989]
* builder/vmware: More robust IP parsing from ifconfig output [GH-1999]
* builder/vmware: Nested output directories for ESXi work [GH-2174]
* command/validate: don't crash for invalid builds [GH-2139]
* post-processor/atlas: Find common archive prefix for Windows [GH-1874]
* post-processor/atlas: Fix index out of range panic [GH-1959]
@ -59,6 +87,8 @@ BUG FIXES:
* post-processor/vagrant-cloud: Don't delete version on error [GH-2014]
* provisioner/puppet-masterless: Allow manifest_file to be a directory
* provisioner/salt-masterless: Add `--retcode-passthrough` to salt-call
* provisioner/shell: chmod executable script to 0755, not 0777 [GH-1708]
* provisioner/shell: inline commands failing will fail the provisioner [GH-2069]
## 0.7.5 (December 9, 2014)

View File

@ -29,13 +29,23 @@ func buildBlockDevices(b []BlockDevice) []*ec2.BlockDeviceMapping {
for _, blockDevice := range b {
ebsBlockDevice := &ec2.EBSBlockDevice{
SnapshotID: &blockDevice.SnapshotId,
Encrypted: &blockDevice.Encrypted,
IOPS: &blockDevice.IOPS,
VolumeType: &blockDevice.VolumeType,
VolumeSize: &blockDevice.VolumeSize,
DeleteOnTermination: &blockDevice.DeleteOnTermination,
}
// IOPS is only valid for SSD Volumes
if blockDevice.VolumeType != "" && blockDevice.VolumeType != "standard" && blockDevice.VolumeType != "gp2" {
ebsBlockDevice.IOPS = &blockDevice.IOPS
}
// You cannot specify Encrypted if you specify a Snapshot ID
if blockDevice.SnapshotId != "" {
ebsBlockDevice.SnapshotID = &blockDevice.SnapshotId
} else {
ebsBlockDevice.Encrypted = &blockDevice.Encrypted
}
mapping := &ec2.BlockDeviceMapping{
EBS: ebsBlockDevice,
DeviceName: &blockDevice.DeviceName,
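Under the two rules above, a hypothetical gp2 device restored from a snapshot would have its EBS portion populated roughly as follows (illustrative sketch, values not from this commit): SnapshotID is carried over, Encrypted is left unset because a snapshot was supplied, and IOPS is omitted because gp2 is not a provisioned-IOPS volume type.

&ec2.EBSBlockDevice{
    SnapshotID:          aws.String("snap-1234"), // Encrypted left unset because a snapshot is given
    VolumeType:          aws.String("gp2"),
    VolumeSize:          aws.Long(40),
    DeleteOnTermination: aws.Boolean(true),
    // IOPS left unset: only provisioned-IOPS volume types (e.g. io1) receive it
}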

View File

@ -5,6 +5,7 @@ import (
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awsutil"
"github.com/aws/aws-sdk-go/service/ec2"
)
@ -28,11 +29,48 @@ func TestBlockDevice(t *testing.T) {
DeviceName: aws.String("/dev/sdb"),
VirtualName: aws.String("ephemeral0"),
EBS: &ec2.EBSBlockDevice{
Encrypted: aws.Boolean(false),
SnapshotID: aws.String("snap-1234"),
VolumeType: aws.String("standard"),
VolumeSize: aws.Long(8),
DeleteOnTermination: aws.Boolean(true),
},
},
},
{
Config: &BlockDevice{
DeviceName: "/dev/sdb",
VolumeSize: 8,
},
Result: &ec2.BlockDeviceMapping{
DeviceName: aws.String("/dev/sdb"),
VirtualName: aws.String(""),
EBS: &ec2.EBSBlockDevice{
Encrypted: aws.Boolean(false),
VolumeType: aws.String(""),
VolumeSize: aws.Long(8),
DeleteOnTermination: aws.Boolean(false),
},
},
},
{
Config: &BlockDevice{
DeviceName: "/dev/sdb",
VirtualName: "ephemeral0",
VolumeType: "io1",
VolumeSize: 8,
DeleteOnTermination: true,
IOPS: 1000,
},
Result: &ec2.BlockDeviceMapping{
DeviceName: aws.String("/dev/sdb"),
VirtualName: aws.String("ephemeral0"),
EBS: &ec2.EBSBlockDevice{
Encrypted: aws.Boolean(false),
VolumeType: aws.String("io1"),
VolumeSize: aws.Long(8),
DeleteOnTermination: aws.Boolean(true),
IOPS: aws.Long(1000),
},
},
@ -48,11 +86,11 @@ func TestBlockDevice(t *testing.T) {
expected := []*ec2.BlockDeviceMapping{tc.Result}
got := blockDevices.BuildAMIDevices()
if !reflect.DeepEqual(expected, got) {
t.Fatalf("bad: %#v", expected)
t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s", awsutil.StringValue(expected), awsutil.StringValue(got))
}
if !reflect.DeepEqual(expected, blockDevices.BuildLaunchDevices()) {
t.Fatalf("bad: %#v", expected)
t.Fatalf("Bad block device, \nexpected: %s\n\ngot: %s", awsutil.StringValue(expected), awsutil.StringValue(blockDevices.BuildLaunchDevices()))
}
}
}

View File

@ -67,10 +67,10 @@ func AMIStateRefreshFunc(conn *ec2.EC2, imageId string) StateRefreshFunc {
// InstanceStateRefreshFunc returns a StateRefreshFunc that is used to watch
// an EC2 instance.
func InstanceStateRefreshFunc(conn *ec2.EC2, i *ec2.Instance) StateRefreshFunc {
func InstanceStateRefreshFunc(conn *ec2.EC2, instanceId string) StateRefreshFunc {
return func() (interface{}, string, error) {
resp, err := conn.DescribeInstances(&ec2.DescribeInstancesInput{
InstanceIDs: []*string{i.InstanceID},
InstanceIDs: []*string{&instanceId},
})
if err != nil {
if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidInstanceID.NotFound" {
@ -91,7 +91,7 @@ func InstanceStateRefreshFunc(conn *ec2.EC2, i *ec2.Instance) StateRefreshFunc {
return nil, "", nil
}
i = resp.Reservations[0].Instances[0]
i := resp.Reservations[0].Instances[0]
return i, *i.State.Name, nil
}
}
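With the signature change above, callers pass the instance ID string directly instead of an *ec2.Instance. A minimal caller sketch under the new form (mirroring the step in the next file; identifiers are assumed from this diff, not a definitive API):

// waitForRunning polls the instance via the string-ID refresh function until
// it reports "running". Sketch only.
func waitForRunning(ec2conn *ec2.EC2, instanceId string, state multistep.StateBag) error {
    stateChange := StateChangeConf{
        Pending:   []string{"pending"},
        Target:    "running",
        Refresh:   InstanceStateRefreshFunc(ec2conn, instanceId),
        StepState: state,
    }
    if _, err := WaitForState(&stateChange); err != nil {
        return fmt.Errorf("Error waiting for instance (%s) to become ready: %s", instanceId, err)
    }
    return nil
}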

View File

@ -223,31 +223,12 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
instanceId = *spotResp.SpotInstanceRequests[0].InstanceID
}
instanceResp, err := ec2conn.DescribeInstances(&ec2.DescribeInstancesInput{
InstanceIDs: []*string{&instanceId}})
for i := 0; i < 10; i++ {
if err == nil {
break
}
time.Sleep(3 * time.Second)
instanceResp, err = ec2conn.DescribeInstances(&ec2.DescribeInstancesInput{
InstanceIDs: []*string{&instanceId}})
}
if err != nil {
err := fmt.Errorf("Error finding source instance (%s): %s", instanceId, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
s.instance = instanceResp.Reservations[0].Instances[0]
ui.Message(fmt.Sprintf("Instance ID: %s", *s.instance.InstanceID))
ui.Say(fmt.Sprintf("Waiting for instance (%s) to become ready...", *s.instance.InstanceID))
ui.Message(fmt.Sprintf("Instance ID: %s", instanceId))
ui.Say(fmt.Sprintf("Waiting for instance (%v) to become ready...", instanceId))
stateChange := StateChangeConf{
Pending: []string{"pending"},
Target: "running",
Refresh: InstanceStateRefreshFunc(ec2conn, s.instance),
Refresh: InstanceStateRefreshFunc(ec2conn, instanceId),
StepState: state,
}
latestInstance, err := WaitForState(&stateChange)
@ -329,7 +310,7 @@ func (s *StepRunSourceInstance) Cleanup(state multistep.StateBag) {
}
stateChange := StateChangeConf{
Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"},
Refresh: InstanceStateRefreshFunc(ec2conn, s.instance),
Refresh: InstanceStateRefreshFunc(ec2conn, *s.instance.InstanceID),
Target: "terminated",
}

View File

@ -20,7 +20,7 @@ func isalphanumeric(b byte) bool {
// Clean up AMI name by replacing invalid characters with "-"
func templateCleanAMIName(s string) string {
allowed := []byte{'(', ')', ',', '/', '-', '_'}
allowed := []byte{'(', ')', ',', '/', '-', '_', ' '}
b := []byte(s)
newb := make([]byte, len(b))
for i, c := range b {

View File

@ -5,8 +5,8 @@ import (
)
func TestAMITemplatePrepare_clean(t *testing.T) {
origName := "AMZamz09(),/-_:&^$%"
expected := "AMZamz09(),/-_-----"
origName := "AMZamz09(),/-_:&^ $%"
expected := "AMZamz09(),/-_--- --"
name := templateCleanAMIName(origName)

View File

@ -40,7 +40,7 @@ func (s *stepStopInstance) Run(state multistep.StateBag) multistep.StepAction {
stateChange := awscommon.StateChangeConf{
Pending: []string{"running", "stopping"},
Target: "stopped",
Refresh: awscommon.InstanceStateRefreshFunc(ec2conn, instance),
Refresh: awscommon.InstanceStateRefreshFunc(ec2conn, *instance.InstanceID),
StepState: state,
}
_, err = awscommon.WaitForState(&stateChange)

View File

@ -73,15 +73,25 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
}
if b.config.BundleUploadCommand == "" {
b.config.BundleUploadCommand = "sudo -i -n ec2-upload-bundle " +
"-b {{.BucketName}} " +
"-m {{.ManifestPath}} " +
"-a {{.AccessKey}} " +
"-s {{.SecretKey}} " +
"-d {{.BundleDirectory}} " +
"--batch " +
"--region {{.Region}} " +
"--retry"
if b.config.IamInstanceProfile != "" {
b.config.BundleUploadCommand = "sudo -i -n ec2-upload-bundle " +
"-b {{.BucketName}} " +
"-m {{.ManifestPath}} " +
"-d {{.BundleDirectory}} " +
"--batch " +
"--region {{.Region}} " +
"--retry"
} else {
b.config.BundleUploadCommand = "sudo -i -n ec2-upload-bundle " +
"-b {{.BucketName}} " +
"-m {{.ManifestPath}} " +
"-a {{.AccessKey}} " +
"-s {{.SecretKey}} " +
"-d {{.BundleDirectory}} " +
"--batch " +
"--region {{.Region}} " +
"--retry"
}
}
if b.config.BundleVolCommand == "" {

View File

@ -1,76 +0,0 @@
// All of the methods used to communicate with the digital_ocean API
// are here. Their API is on a path to V2, so just plain JSON is used
// in place of a proper client library for now.
package digitalocean
type Region struct {
Slug string `json:"slug"`
Name string `json:"name"`
// v1 only
Id uint `json:"id,omitempty"`
// v2 only
Sizes []string `json:"sizes,omitempty"`
Available bool `json:"available,omitempty"`
Features []string `json:"features,omitempty"`
}
type RegionsResp struct {
Regions []Region
}
type Size struct {
Slug string `json:"slug"`
// v1 only
Id uint `json:"id,omitempty"`
Name string `json:"name,omitempty"`
// v2 only
Memory uint `json:"memory,omitempty"`
VCPUS uint `json:"vcpus,omitempty"`
Disk uint `json:"disk,omitempty"`
Transfer float64 `json:"transfer,omitempty"`
PriceMonthly float64 `json:"price_monthly,omitempty"`
PriceHourly float64 `json:"price_hourly,omitempty"`
}
type SizesResp struct {
Sizes []Size
}
type Image struct {
Id uint `json:"id"`
Name string `json:"name"`
Slug string `json:"slug"`
Distribution string `json:"distribution"`
// v2 only
Public bool `json:"public,omitempty"`
ActionIds []string `json:"action_ids,omitempty"`
CreatedAt string `json:"created_at,omitempty"`
}
type ImagesResp struct {
Images []Image
}
type DigitalOceanClient interface {
CreateKey(string, string) (uint, error)
DestroyKey(uint) error
CreateDroplet(string, string, string, string, uint, bool) (uint, error)
DestroyDroplet(uint) error
PowerOffDroplet(uint) error
ShutdownDroplet(uint) error
CreateSnapshot(uint, string) error
Images() ([]Image, error)
DestroyImage(uint) error
DropletStatus(uint) (string, string, error)
Image(string) (Image, error)
Regions() ([]Region, error)
Region(string) (Region, error)
Sizes() ([]Size, error)
Size(string) (Size, error)
}

View File

@ -1,382 +0,0 @@
// All of the methods used to communicate with the digital_ocean API
// are here. Their API is on a path to V2, so just plain JSON is used
// in place of a proper client library for now.
package digitalocean
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/mitchellh/mapstructure"
)
type DigitalOceanClientV1 struct {
// The http client for communicating
client *http.Client
// Credentials
ClientID string
APIKey string
// The base URL of the API
APIURL string
}
// Creates a new client for communicating with DO
func DigitalOceanClientNewV1(client string, key string, url string) *DigitalOceanClientV1 {
c := &DigitalOceanClientV1{
client: &http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
},
},
APIURL: url,
ClientID: client,
APIKey: key,
}
return c
}
// Creates an SSH Key and returns it's id
func (d DigitalOceanClientV1) CreateKey(name string, pub string) (uint, error) {
params := url.Values{}
params.Set("name", name)
params.Set("ssh_pub_key", pub)
body, err := NewRequestV1(d, "ssh_keys/new", params)
if err != nil {
return 0, err
}
// Read the SSH key's ID we just created
key := body["ssh_key"].(map[string]interface{})
keyId := key["id"].(float64)
return uint(keyId), nil
}
// Destroys an SSH key
func (d DigitalOceanClientV1) DestroyKey(id uint) error {
path := fmt.Sprintf("ssh_keys/%v/destroy", id)
_, err := NewRequestV1(d, path, url.Values{})
return err
}
// Creates a droplet and returns it's id
func (d DigitalOceanClientV1) CreateDroplet(name string, size string, image string, region string, keyId uint, privateNetworking bool) (uint, error) {
params := url.Values{}
params.Set("name", name)
found_size, err := d.Size(size)
if err != nil {
return 0, fmt.Errorf("Invalid size or lookup failure: '%s': %s", size, err)
}
found_image, err := d.Image(image)
if err != nil {
return 0, fmt.Errorf("Invalid image or lookup failure: '%s': %s", image, err)
}
found_region, err := d.Region(region)
if err != nil {
return 0, fmt.Errorf("Invalid region or lookup failure: '%s': %s", region, err)
}
params.Set("size_slug", found_size.Slug)
params.Set("image_slug", found_image.Slug)
params.Set("region_slug", found_region.Slug)
params.Set("ssh_key_ids", fmt.Sprintf("%v", keyId))
params.Set("private_networking", fmt.Sprintf("%v", privateNetworking))
body, err := NewRequestV1(d, "droplets/new", params)
if err != nil {
return 0, err
}
// Read the Droplets ID
droplet := body["droplet"].(map[string]interface{})
dropletId := droplet["id"].(float64)
return uint(dropletId), err
}
// Destroys a droplet
func (d DigitalOceanClientV1) DestroyDroplet(id uint) error {
path := fmt.Sprintf("droplets/%v/destroy", id)
_, err := NewRequestV1(d, path, url.Values{})
return err
}
// Powers off a droplet
func (d DigitalOceanClientV1) PowerOffDroplet(id uint) error {
path := fmt.Sprintf("droplets/%v/power_off", id)
_, err := NewRequestV1(d, path, url.Values{})
return err
}
// Shutsdown a droplet. This is a "soft" shutdown.
func (d DigitalOceanClientV1) ShutdownDroplet(id uint) error {
path := fmt.Sprintf("droplets/%v/shutdown", id)
_, err := NewRequestV1(d, path, url.Values{})
return err
}
// Creates a snaphot of a droplet by it's ID
func (d DigitalOceanClientV1) CreateSnapshot(id uint, name string) error {
path := fmt.Sprintf("droplets/%v/snapshot", id)
params := url.Values{}
params.Set("name", name)
_, err := NewRequestV1(d, path, params)
return err
}
// Returns all available images.
func (d DigitalOceanClientV1) Images() ([]Image, error) {
resp, err := NewRequestV1(d, "images", url.Values{})
if err != nil {
return nil, err
}
var result ImagesResp
if err := mapstructure.Decode(resp, &result); err != nil {
return nil, err
}
return result.Images, nil
}
// Destroys an image by its ID.
func (d DigitalOceanClientV1) DestroyImage(id uint) error {
path := fmt.Sprintf("images/%d/destroy", id)
_, err := NewRequestV1(d, path, url.Values{})
return err
}
// Returns DO's string representation of status "off" "new" "active" etc.
func (d DigitalOceanClientV1) DropletStatus(id uint) (string, string, error) {
path := fmt.Sprintf("droplets/%v", id)
body, err := NewRequestV1(d, path, url.Values{})
if err != nil {
return "", "", err
}
var ip string
// Read the droplet's "status"
droplet := body["droplet"].(map[string]interface{})
status := droplet["status"].(string)
if droplet["ip_address"] != nil {
ip = droplet["ip_address"].(string)
}
return ip, status, err
}
// Sends an api request and returns a generic map[string]interface of
// the response.
func NewRequestV1(d DigitalOceanClientV1, path string, params url.Values) (map[string]interface{}, error) {
client := d.client
// Add the authentication parameters
params.Set("client_id", d.ClientID)
params.Set("api_key", d.APIKey)
url := fmt.Sprintf("%s/%s?%s", d.APIURL, path, params.Encode())
// Do some basic scrubbing so sensitive information doesn't appear in logs
scrubbedUrl := strings.Replace(url, d.ClientID, "CLIENT_ID", -1)
scrubbedUrl = strings.Replace(scrubbedUrl, d.APIKey, "API_KEY", -1)
log.Printf("sending new request to digitalocean: %s", scrubbedUrl)
var lastErr error
for attempts := 1; attempts < 10; attempts++ {
resp, err := client.Get(url)
if err != nil {
return nil, err
}
body, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
return nil, err
}
log.Printf("response from digitalocean: %s", body)
var decodedResponse map[string]interface{}
err = json.Unmarshal(body, &decodedResponse)
if err != nil {
err = errors.New(fmt.Sprintf("Failed to decode JSON response (HTTP %v) from DigitalOcean: %s",
resp.StatusCode, body))
return decodedResponse, err
}
// Check for errors sent by digitalocean
status := decodedResponse["status"].(string)
if status == "OK" {
return decodedResponse, nil
}
if status == "ERROR" {
statusRaw, ok := decodedResponse["error_message"]
if ok {
status = statusRaw.(string)
} else {
status = fmt.Sprintf(
"Unknown error. Full response body: %s", body)
}
}
lastErr = errors.New(fmt.Sprintf("Received error from DigitalOcean (%d): %s",
resp.StatusCode, status))
log.Println(lastErr)
if strings.Contains(status, "a pending event") {
// Retry, DigitalOcean sends these dumb "pending event"
// errors all the time.
time.Sleep(5 * time.Second)
continue
}
// Some other kind of error. Just return.
return decodedResponse, lastErr
}
return nil, lastErr
}
func (d DigitalOceanClientV1) Image(slug_or_name_or_id string) (Image, error) {
images, err := d.Images()
if err != nil {
return Image{}, err
}
for _, image := range images {
if strings.EqualFold(image.Slug, slug_or_name_or_id) {
return image, nil
}
}
for _, image := range images {
if strings.EqualFold(image.Name, slug_or_name_or_id) {
return image, nil
}
}
for _, image := range images {
id, err := strconv.Atoi(slug_or_name_or_id)
if err == nil {
if image.Id == uint(id) {
return image, nil
}
}
}
err = errors.New(fmt.Sprintf("Unknown image '%v'", slug_or_name_or_id))
return Image{}, err
}
// Returns all available regions.
func (d DigitalOceanClientV1) Regions() ([]Region, error) {
resp, err := NewRequestV1(d, "regions", url.Values{})
if err != nil {
return nil, err
}
var result RegionsResp
if err := mapstructure.Decode(resp, &result); err != nil {
return nil, err
}
return result.Regions, nil
}
func (d DigitalOceanClientV1) Region(slug_or_name_or_id string) (Region, error) {
regions, err := d.Regions()
if err != nil {
return Region{}, err
}
for _, region := range regions {
if strings.EqualFold(region.Slug, slug_or_name_or_id) {
return region, nil
}
}
for _, region := range regions {
if strings.EqualFold(region.Name, slug_or_name_or_id) {
return region, nil
}
}
for _, region := range regions {
id, err := strconv.Atoi(slug_or_name_or_id)
if err == nil {
if region.Id == uint(id) {
return region, nil
}
}
}
err = errors.New(fmt.Sprintf("Unknown region '%v'", slug_or_name_or_id))
return Region{}, err
}
// Returns all available sizes.
func (d DigitalOceanClientV1) Sizes() ([]Size, error) {
resp, err := NewRequestV1(d, "sizes", url.Values{})
if err != nil {
return nil, err
}
var result SizesResp
if err := mapstructure.Decode(resp, &result); err != nil {
return nil, err
}
return result.Sizes, nil
}
func (d DigitalOceanClientV1) Size(slug_or_name_or_id string) (Size, error) {
sizes, err := d.Sizes()
if err != nil {
return Size{}, err
}
for _, size := range sizes {
if strings.EqualFold(size.Slug, slug_or_name_or_id) {
return size, nil
}
}
for _, size := range sizes {
if strings.EqualFold(size.Name, slug_or_name_or_id) {
return size, nil
}
}
for _, size := range sizes {
id, err := strconv.Atoi(slug_or_name_or_id)
if err == nil {
if size.Id == uint(id) {
return size, nil
}
}
}
err = errors.New(fmt.Sprintf("Unknown size '%v'", slug_or_name_or_id))
return Size{}, err
}

View File

@ -1,462 +0,0 @@
// are here. Their API is on a path to V2, so just plain JSON is used
// in place of a proper client library for now.
package digitalocean
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"net/http"
"strconv"
"strings"
)
type DigitalOceanClientV2 struct {
// The http client for communicating
client *http.Client
// Credentials
APIToken string
// The base URL of the API
APIURL string
}
// Creates a new client for communicating with DO
func DigitalOceanClientNewV2(token string, url string) *DigitalOceanClientV2 {
c := &DigitalOceanClientV2{
client: &http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
},
},
APIURL: url,
APIToken: token,
}
return c
}
// Creates an SSH Key and returns it's id
func (d DigitalOceanClientV2) CreateKey(name string, pub string) (uint, error) {
type KeyReq struct {
Name string `json:"name"`
PublicKey string `json:"public_key"`
}
type KeyRes struct {
SSHKey struct {
Id uint
Name string
Fingerprint string
PublicKey string `json:"public_key"`
} `json:"ssh_key"`
}
req := &KeyReq{Name: name, PublicKey: pub}
res := KeyRes{}
err := NewRequestV2(d, "v2/account/keys", "POST", req, &res)
if err != nil {
return 0, err
}
return res.SSHKey.Id, err
}
// Destroys an SSH key
func (d DigitalOceanClientV2) DestroyKey(id uint) error {
path := fmt.Sprintf("v2/account/keys/%v", id)
return NewRequestV2(d, path, "DELETE", nil, nil)
}
// Creates a droplet and returns it's id
func (d DigitalOceanClientV2) CreateDroplet(name string, size string, image string, region string, keyId uint, privateNetworking bool) (uint, error) {
type DropletReq struct {
Name string `json:"name"`
Region string `json:"region"`
Size string `json:"size"`
Image string `json:"image"`
SSHKeys []string `json:"ssh_keys,omitempty"`
Backups bool `json:"backups,omitempty"`
IPv6 bool `json:"ipv6,omitempty"`
PrivateNetworking bool `json:"private_networking,omitempty"`
}
type DropletRes struct {
Droplet struct {
Id uint
Name string
Memory uint
VCPUS uint `json:"vcpus"`
Disk uint
Region Region
Image Image
Size Size
Locked bool
CreateAt string `json:"created_at"`
Status string
Networks struct {
V4 []struct {
IPAddr string `json:"ip_address"`
Netmask string
Gateway string
Type string
} `json:"v4,omitempty"`
V6 []struct {
IPAddr string `json:"ip_address"`
CIDR uint `json:"cidr"`
Gateway string
Type string
} `json:"v6,omitempty"`
}
Kernel struct {
Id uint
Name string
Version string
}
BackupIds []uint
SnapshotIds []uint
ActionIds []uint
Features []string `json:"features,omitempty"`
}
}
req := &DropletReq{Name: name}
res := DropletRes{}
found_size, err := d.Size(size)
if err != nil {
return 0, fmt.Errorf("Invalid size or lookup failure: '%s': %s", size, err)
}
found_image, err := d.Image(image)
if err != nil {
return 0, fmt.Errorf("Invalid image or lookup failure: '%s': %s", image, err)
}
found_region, err := d.Region(region)
if err != nil {
return 0, fmt.Errorf("Invalid region or lookup failure: '%s': %s", region, err)
}
if found_image.Slug == "" {
req.Image = strconv.Itoa(int(found_image.Id))
} else {
req.Image = found_image.Slug
}
req.Size = found_size.Slug
req.Region = found_region.Slug
req.SSHKeys = []string{fmt.Sprintf("%v", keyId)}
req.PrivateNetworking = privateNetworking
err = NewRequestV2(d, "v2/droplets", "POST", req, &res)
if err != nil {
return 0, err
}
return res.Droplet.Id, err
}
// Destroys a droplet
func (d DigitalOceanClientV2) DestroyDroplet(id uint) error {
path := fmt.Sprintf("v2/droplets/%v", id)
return NewRequestV2(d, path, "DELETE", nil, nil)
}
// Powers off a droplet
func (d DigitalOceanClientV2) PowerOffDroplet(id uint) error {
type ActionReq struct {
Type string `json:"type"`
}
type ActionRes struct {
}
req := &ActionReq{Type: "power_off"}
path := fmt.Sprintf("v2/droplets/%v/actions", id)
return NewRequestV2(d, path, "POST", req, nil)
}
// Shutsdown a droplet. This is a "soft" shutdown.
func (d DigitalOceanClientV2) ShutdownDroplet(id uint) error {
type ActionReq struct {
Type string `json:"type"`
}
type ActionRes struct {
}
req := &ActionReq{Type: "shutdown"}
path := fmt.Sprintf("v2/droplets/%v/actions", id)
return NewRequestV2(d, path, "POST", req, nil)
}
// Creates a snaphot of a droplet by it's ID
func (d DigitalOceanClientV2) CreateSnapshot(id uint, name string) error {
type ActionReq struct {
Type string `json:"type"`
Name string `json:"name"`
}
type ActionRes struct {
}
req := &ActionReq{Type: "snapshot", Name: name}
path := fmt.Sprintf("v2/droplets/%v/actions", id)
return NewRequestV2(d, path, "POST", req, nil)
}
// Returns all available images.
func (d DigitalOceanClientV2) Images() ([]Image, error) {
res := ImagesResp{}
err := NewRequestV2(d, "v2/images?per_page=200", "GET", nil, &res)
if err != nil {
return nil, err
}
return res.Images, nil
}
// Destroys an image by its ID.
func (d DigitalOceanClientV2) DestroyImage(id uint) error {
path := fmt.Sprintf("v2/images/%d", id)
return NewRequestV2(d, path, "DELETE", nil, nil)
}
// Returns DO's string representation of status "off" "new" "active" etc.
func (d DigitalOceanClientV2) DropletStatus(id uint) (string, string, error) {
path := fmt.Sprintf("v2/droplets/%v", id)
type DropletRes struct {
Droplet struct {
Id uint
Name string
Memory uint
VCPUS uint `json:"vcpus"`
Disk uint
Region Region
Image Image
Size Size
Locked bool
CreateAt string `json:"created_at"`
Status string
Networks struct {
V4 []struct {
IPAddr string `json:"ip_address"`
Netmask string
Gateway string
Type string
} `json:"v4,omitempty"`
V6 []struct {
IPAddr string `json:"ip_address"`
CIDR uint `json:"cidr"`
Gateway string
Type string
} `json:"v6,omitempty"`
}
Kernel struct {
Id uint
Name string
Version string
}
BackupIds []uint
SnapshotIds []uint
ActionIds []uint
Features []string `json:"features,omitempty"`
}
}
res := DropletRes{}
err := NewRequestV2(d, path, "GET", nil, &res)
if err != nil {
return "", "", err
}
var ip string
for _, n := range res.Droplet.Networks.V4 {
if n.Type == "public" {
ip = n.IPAddr
}
}
return ip, res.Droplet.Status, err
}
// Sends an api request and returns a generic map[string]interface of
// the response.
func NewRequestV2(d DigitalOceanClientV2, path string, method string, req interface{}, res interface{}) error {
var err error
var request *http.Request
client := d.client
buf := new(bytes.Buffer)
// Add the authentication parameters
url := fmt.Sprintf("%s/%s", d.APIURL, path)
if req != nil {
enc := json.NewEncoder(buf)
enc.Encode(req)
defer buf.Reset()
request, err = http.NewRequest(method, url, buf)
request.Header.Add("Content-Type", "application/json")
} else {
request, err = http.NewRequest(method, url, nil)
}
if err != nil {
return err
}
// Add the authentication parameters
request.Header.Add("Authorization", "Bearer "+d.APIToken)
if buf != nil {
log.Printf("sending new request to digitalocean: %s buffer: %s", url, buf)
} else {
log.Printf("sending new request to digitalocean: %s", url)
}
resp, err := client.Do(request)
if err != nil {
return err
}
if method == "DELETE" && resp.StatusCode == 204 {
if resp.Body != nil {
resp.Body.Close()
}
return nil
}
if resp.Body == nil {
return errors.New("Request returned empty body")
}
body, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
return err
}
log.Printf("response from digitalocean: %s", body)
err = json.Unmarshal(body, &res)
if err != nil {
return errors.New(fmt.Sprintf("Failed to decode JSON response %s (HTTP %v) from DigitalOcean: %s", err.Error(),
resp.StatusCode, body))
}
switch resp.StatusCode {
case 403, 401, 429, 422, 404, 503, 500:
return errors.New(fmt.Sprintf("digitalocean request error: %+v", res))
}
return nil
}
func (d DigitalOceanClientV2) Image(slug_or_name_or_id string) (Image, error) {
images, err := d.Images()
if err != nil {
return Image{}, err
}
for _, image := range images {
if strings.EqualFold(image.Slug, slug_or_name_or_id) {
return image, nil
}
}
for _, image := range images {
if strings.EqualFold(image.Name, slug_or_name_or_id) {
return image, nil
}
}
for _, image := range images {
id, err := strconv.Atoi(slug_or_name_or_id)
if err == nil {
if image.Id == uint(id) {
return image, nil
}
}
}
err = errors.New(fmt.Sprintf("Unknown image '%v'", slug_or_name_or_id))
return Image{}, err
}
// Returns all available regions.
func (d DigitalOceanClientV2) Regions() ([]Region, error) {
res := RegionsResp{}
err := NewRequestV2(d, "v2/regions?per_page=200", "GET", nil, &res)
if err != nil {
return nil, err
}
return res.Regions, nil
}
func (d DigitalOceanClientV2) Region(slug_or_name_or_id string) (Region, error) {
regions, err := d.Regions()
if err != nil {
return Region{}, err
}
for _, region := range regions {
if strings.EqualFold(region.Slug, slug_or_name_or_id) {
return region, nil
}
}
for _, region := range regions {
if strings.EqualFold(region.Name, slug_or_name_or_id) {
return region, nil
}
}
for _, region := range regions {
id, err := strconv.Atoi(slug_or_name_or_id)
if err == nil {
if region.Id == uint(id) {
return region, nil
}
}
}
err = errors.New(fmt.Sprintf("Unknown region '%v'", slug_or_name_or_id))
return Region{}, err
}
// Returns all available sizes.
func (d DigitalOceanClientV2) Sizes() ([]Size, error) {
res := SizesResp{}
err := NewRequestV2(d, "v2/sizes?per_page=200", "GET", nil, &res)
if err != nil {
return nil, err
}
return res.Sizes, nil
}
func (d DigitalOceanClientV2) Size(slug_or_name_or_id string) (Size, error) {
sizes, err := d.Sizes()
if err != nil {
return Size{}, err
}
for _, size := range sizes {
if strings.EqualFold(size.Slug, slug_or_name_or_id) {
return size, nil
}
}
for _, size := range sizes {
if strings.EqualFold(size.Name, slug_or_name_or_id) {
return size, nil
}
}
for _, size := range sizes {
id, err := strconv.Atoi(slug_or_name_or_id)
if err == nil {
if size.Id == uint(id) {
return size, nil
}
}
}
err = errors.New(fmt.Sprintf("Unknown size '%v'", slug_or_name_or_id))
return Size{}, err
}

View File

@ -4,6 +4,8 @@ import (
"fmt"
"log"
"strconv"
"github.com/digitalocean/godo"
)
type Artifact struct {
@ -11,13 +13,13 @@ type Artifact struct {
snapshotName string
// The ID of the image
snapshotId uint
snapshotId int
// The name of the region
regionName string
// The client for making API calls
client DigitalOceanClient
client *godo.Client
}
func (*Artifact) BuilderId() string {
@ -43,5 +45,6 @@ func (a *Artifact) State(name string) interface{} {
func (a *Artifact) Destroy() error {
log.Printf("Destroying image: %d (%s)", a.snapshotId, a.snapshotName)
return a.client.DestroyImage(a.snapshotId)
_, err := a.client.Images.Delete(a.snapshotId)
return err
}

View File

@ -4,208 +4,39 @@
package digitalocean
import (
"errors"
"fmt"
"log"
"os"
"time"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/helper/config"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/template/interpolate"
"golang.org/x/oauth2"
)
// see https://api.digitalocean.com/images/?client_id=[client_id]&api_key=[api_key]
// name="Ubuntu 12.04.4 x64", id=6374128,
const DefaultImage = "ubuntu-12-04-x64"
// see https://api.digitalocean.com/regions/?client_id=[client_id]&api_key=[api_key]
// name="New York 3", id=8
const DefaultRegion = "nyc3"
// see https://api.digitalocean.com/sizes/?client_id=[client_id]&api_key=[api_key]
// name="512MB", id=66 (the smallest droplet size)
const DefaultSize = "512mb"
// The unique id for the builder
const BuilderId = "pearkes.digitalocean"
// Configuration tells the builder the credentials
// to use while communicating with DO and describes the image
// you are creating
type Config struct {
common.PackerConfig `mapstructure:",squash"`
ClientID string `mapstructure:"client_id"`
APIKey string `mapstructure:"api_key"`
APIURL string `mapstructure:"api_url"`
APIToken string `mapstructure:"api_token"`
RegionID uint `mapstructure:"region_id"`
SizeID uint `mapstructure:"size_id"`
ImageID uint `mapstructure:"image_id"`
Region string `mapstructure:"region"`
Size string `mapstructure:"size"`
Image string `mapstructure:"image"`
PrivateNetworking bool `mapstructure:"private_networking"`
SnapshotName string `mapstructure:"snapshot_name"`
DropletName string `mapstructure:"droplet_name"`
SSHUsername string `mapstructure:"ssh_username"`
SSHPort uint `mapstructure:"ssh_port"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
RawStateTimeout string `mapstructure:"state_timeout"`
// These are unexported since they're set by other fields
// being set.
sshTimeout time.Duration
stateTimeout time.Duration
ctx *interpolate.Context
}
type Builder struct {
config Config
runner multistep.Runner
}
func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
err := config.Decode(&b.config, &config.DecodeOpts{
Interpolate: true,
}, raws...)
if err != nil {
return nil, err
c, warnings, errs := NewConfig(raws...)
if errs != nil {
return warnings, errs
}
b.config = *c
// Optional configuration with defaults
if b.config.APIKey == "" {
// Default to environment variable for api_key, if it exists
b.config.APIKey = os.Getenv("DIGITALOCEAN_API_KEY")
}
if b.config.ClientID == "" {
// Default to environment variable for client_id, if it exists
b.config.ClientID = os.Getenv("DIGITALOCEAN_CLIENT_ID")
}
if b.config.APIURL == "" {
// Default to environment variable for api_url, if it exists
b.config.APIURL = os.Getenv("DIGITALOCEAN_API_URL")
}
if b.config.APIToken == "" {
// Default to environment variable for api_token, if it exists
b.config.APIToken = os.Getenv("DIGITALOCEAN_API_TOKEN")
}
if b.config.Region == "" {
if b.config.RegionID != 0 {
b.config.Region = fmt.Sprintf("%v", b.config.RegionID)
} else {
b.config.Region = DefaultRegion
}
}
if b.config.Size == "" {
if b.config.SizeID != 0 {
b.config.Size = fmt.Sprintf("%v", b.config.SizeID)
} else {
b.config.Size = DefaultSize
}
}
if b.config.Image == "" {
if b.config.ImageID != 0 {
b.config.Image = fmt.Sprintf("%v", b.config.ImageID)
} else {
b.config.Image = DefaultImage
}
}
if b.config.SnapshotName == "" {
// Default to packer-{{ unix timestamp (utc) }}
b.config.SnapshotName = "packer-{{timestamp}}"
}
if b.config.DropletName == "" {
// Default to packer-[time-ordered-uuid]
b.config.DropletName = fmt.Sprintf("packer-%s", uuid.TimeOrderedUUID())
}
if b.config.SSHUsername == "" {
// Default to "root". You can override this if your
// SourceImage has a different user account then the DO default
b.config.SSHUsername = "root"
}
if b.config.SSHPort == 0 {
// Default to port 22 per DO default
b.config.SSHPort = 22
}
if b.config.RawSSHTimeout == "" {
// Default to 1 minute timeouts
b.config.RawSSHTimeout = "1m"
}
if b.config.RawStateTimeout == "" {
// Default to 6 minute timeouts waiting for
// desired state. i.e waiting for droplet to become active
b.config.RawStateTimeout = "6m"
}
var errs *packer.MultiError
if b.config.APIToken == "" {
// Required configurations that will display errors if not set
if b.config.ClientID == "" {
errs = packer.MultiErrorAppend(
errs, errors.New("a client_id for v1 auth or api_token for v2 auth must be specified"))
}
if b.config.APIKey == "" {
errs = packer.MultiErrorAppend(
errs, errors.New("a api_key for v1 auth or api_token for v2 auth must be specified"))
}
}
if b.config.APIURL == "" {
b.config.APIURL = "https://api.digitalocean.com"
}
sshTimeout, err := time.ParseDuration(b.config.RawSSHTimeout)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Failed parsing ssh_timeout: %s", err))
}
b.config.sshTimeout = sshTimeout
stateTimeout, err := time.ParseDuration(b.config.RawStateTimeout)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Failed parsing state_timeout: %s", err))
}
b.config.stateTimeout = stateTimeout
if errs != nil && len(errs.Errors) > 0 {
return nil, errs
}
common.ScrubConfig(b.config, b.config.ClientID, b.config.APIKey)
return nil, nil
}
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
var client DigitalOceanClient
// Initialize the DO API client
if b.config.APIToken == "" {
client = DigitalOceanClientNewV1(b.config.ClientID, b.config.APIKey, b.config.APIURL)
} else {
client = DigitalOceanClientNewV2(b.config.APIToken, b.config.APIURL)
}
client := godo.NewClient(oauth2.NewClient(oauth2.NoContext, &apiTokenSource{
AccessToken: b.config.APIToken,
}))
// Set up the state
state := new(multistep.BasicStateBag)
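The client construction above refers to an apiTokenSource type that is not part of this hunk. A minimal sketch of such a type, assuming only the standard golang.org/x/oauth2 TokenSource interface (not necessarily the exact definition used elsewhere in this package):

// apiTokenSource returns a static bearer token so godo authenticates every
// request with the configured api_token. Illustrative sketch only.
type apiTokenSource struct {
    AccessToken string
}

func (t *apiTokenSource) Token() (*oauth2.Token, error) {
    return &oauth2.Token{AccessToken: t.AccessToken}, nil
}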
@ -216,7 +47,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
// Build the steps
steps := []multistep.Step{
new(stepCreateSSHKey),
&stepCreateSSHKey{
Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("do_%s.pem", b.config.PackerBuildName),
},
new(stepCreateDroplet),
new(stepDropletInfo),
&common.StepConnectSSH{
@ -252,26 +86,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
return nil, nil
}
sregion := state.Get("region")
var region string
if sregion != nil {
region = sregion.(string)
} else {
region = fmt.Sprintf("%v", state.Get("region_id").(uint))
}
found_region, err := client.Region(region)
if err != nil {
return nil, err
}
artifact := &Artifact{
snapshotName: state.Get("snapshot_name").(string),
snapshotId: state.Get("snapshot_image_id").(uint),
regionName: found_region.Name,
snapshotId: state.Get("snapshot_image_id").(int),
regionName: state.Get("region").(string),
client: client,
}

View File

@ -0,0 +1,33 @@
package digitalocean
import (
"os"
"testing"
builderT "github.com/mitchellh/packer/helper/builder/testing"
)
func TestBuilderAcc_basic(t *testing.T) {
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccBasic,
})
}
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("DIGITALOCEAN_API_TOKEN"); v == "" {
t.Fatal("DIGITALOCEAN_API_TOKEN must be set for acceptance tests")
}
}
const testBuilderAccBasic = `
{
"builders": [{
"type": "test",
"region": "nyc2",
"size": "512mb",
"image": "ubuntu-12-04-x64"
}]
}
`

View File

@ -1,22 +1,18 @@
package digitalocean
import (
"github.com/mitchellh/packer/packer"
"os"
"strconv"
"testing"
)
func init() {
// Clear out the credential env vars
os.Setenv("DIGITALOCEAN_API_KEY", "")
os.Setenv("DIGITALOCEAN_CLIENT_ID", "")
}
"github.com/mitchellh/packer/packer"
)
func testConfig() map[string]interface{} {
return map[string]interface{}{
"client_id": "foo",
"api_key": "bar",
"api_token": "bar",
"region": "nyc2",
"size": "512mb",
"image": "foo",
}
}
@ -43,90 +39,6 @@ func TestBuilder_Prepare_BadType(t *testing.T) {
}
}
func TestBuilderPrepare_APIKey(t *testing.T) {
var b Builder
config := testConfig()
// Test good
config["api_key"] = "foo"
warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.APIKey != "foo" {
t.Errorf("access key invalid: %s", b.config.APIKey)
}
// Test bad
delete(config, "api_key")
b = Builder{}
warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err == nil {
t.Fatal("should have error")
}
// Test env variable
delete(config, "api_key")
os.Setenv("DIGITALOCEAN_API_KEY", "foo")
defer os.Setenv("DIGITALOCEAN_API_KEY", "")
warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
}
func TestBuilderPrepare_ClientID(t *testing.T) {
var b Builder
config := testConfig()
// Test good
config["client_id"] = "foo"
warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.ClientID != "foo" {
t.Errorf("invalid: %s", b.config.ClientID)
}
// Test bad
delete(config, "client_id")
b = Builder{}
warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err == nil {
t.Fatal("should have error")
}
// Test env variable
delete(config, "client_id")
os.Setenv("DIGITALOCEAN_CLIENT_ID", "foo")
defer os.Setenv("DIGITALOCEAN_CLIENT_ID", "")
warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
}
func TestBuilderPrepare_InvalidKey(t *testing.T) {
var b Builder
config := testConfig()
@ -147,22 +59,18 @@ func TestBuilderPrepare_Region(t *testing.T) {
config := testConfig()
// Test default
delete(config, "region")
warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.Region != DefaultRegion {
t.Errorf("found %s, expected %s", b.config.Region, DefaultRegion)
if err == nil {
t.Fatalf("should error")
}
expected := "sfo1"
// Test set
config["region_id"] = 0
config["region"] = expected
b = Builder{}
warnings, err = b.Prepare(config)
@ -183,22 +91,18 @@ func TestBuilderPrepare_Size(t *testing.T) {
config := testConfig()
// Test default
delete(config, "size")
warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.Size != DefaultSize {
t.Errorf("found %s, expected %s", b.config.Size, DefaultSize)
if err == nil {
t.Fatalf("should error")
}
expected := "1024mb"
// Test set
config["size_id"] = 0
config["size"] = expected
b = Builder{}
warnings, err = b.Prepare(config)
@ -219,22 +123,18 @@ func TestBuilderPrepare_Image(t *testing.T) {
config := testConfig()
// Test default
delete(config, "image")
warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.Image != DefaultImage {
t.Errorf("found %s, expected %s", b.config.Image, DefaultImage)
if err == nil {
t.Fatal("should error")
}
expected := "ubuntu-14-04-x64"
// Test set
config["image_id"] = 0
config["image"] = expected
b = Builder{}
warnings, err = b.Prepare(config)

View File

@ -0,0 +1,146 @@
package digitalocean
import (
"errors"
"fmt"
"os"
"time"
"github.com/mitchellh/mapstructure"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/helper/config"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/template/interpolate"
)
type Config struct {
common.PackerConfig `mapstructure:",squash"`
APIToken string `mapstructure:"api_token"`
Region string `mapstructure:"region"`
Size string `mapstructure:"size"`
Image string `mapstructure:"image"`
PrivateNetworking bool `mapstructure:"private_networking"`
SnapshotName string `mapstructure:"snapshot_name"`
DropletName string `mapstructure:"droplet_name"`
UserData string `mapstructure:"user_data"`
SSHUsername string `mapstructure:"ssh_username"`
SSHPort uint `mapstructure:"ssh_port"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
RawStateTimeout string `mapstructure:"state_timeout"`
// These are unexported since they're set by other fields
// being set.
sshTimeout time.Duration
stateTimeout time.Duration
ctx *interpolate.Context
}
func NewConfig(raws ...interface{}) (*Config, []string, error) {
c := new(Config)
var md mapstructure.Metadata
err := config.Decode(c, &config.DecodeOpts{
Metadata: &md,
Interpolate: true,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{
"run_command",
},
},
}, raws...)
if err != nil {
return nil, nil, err
}
// Defaults
if c.APIToken == "" {
// Default to environment variable for api_token, if it exists
c.APIToken = os.Getenv("DIGITALOCEAN_API_TOKEN")
}
if c.SnapshotName == "" {
def, err := interpolate.Render("packer-{{timestamp}}", nil)
if err != nil {
panic(err)
}
// Default to packer-{{ unix timestamp (utc) }}
c.SnapshotName = def
}
if c.DropletName == "" {
// Default to packer-[time-ordered-uuid]
c.DropletName = fmt.Sprintf("packer-%s", uuid.TimeOrderedUUID())
}
if c.SSHUsername == "" {
// Default to "root". You can override this if your
// SourceImage has a different user account then the DO default
c.SSHUsername = "root"
}
if c.SSHPort == 0 {
// Default to port 22 per DO default
c.SSHPort = 22
}
if c.RawSSHTimeout == "" {
// Default to 1 minute timeouts
c.RawSSHTimeout = "1m"
}
if c.RawStateTimeout == "" {
// Default to 6 minute timeouts waiting for
// desired state. i.e waiting for droplet to become active
c.RawStateTimeout = "6m"
}
var errs *packer.MultiError
if c.APIToken == "" {
// Required configurations that will display errors if not set
errs = packer.MultiErrorAppend(
errs, errors.New("api_token for auth must be specified"))
}
if c.Region == "" {
errs = packer.MultiErrorAppend(
errs, errors.New("region is required"))
}
if c.Size == "" {
errs = packer.MultiErrorAppend(
errs, errors.New("size is required"))
}
if c.Image == "" {
errs = packer.MultiErrorAppend(
errs, errors.New("image is required"))
}
sshTimeout, err := time.ParseDuration(c.RawSSHTimeout)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Failed parsing ssh_timeout: %s", err))
}
c.sshTimeout = sshTimeout
stateTimeout, err := time.ParseDuration(c.RawStateTimeout)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Failed parsing state_timeout: %s", err))
}
c.stateTimeout = stateTimeout
if errs != nil && len(errs.Errors) > 0 {
return nil, nil, errs
}
common.ScrubConfig(c, c.APIToken)
return c, nil, nil
}

View File

@ -3,25 +3,36 @@ package digitalocean
import (
"fmt"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
type stepCreateDroplet struct {
dropletId uint
dropletId int
}
func (s *stepCreateDroplet) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
c := state.Get("config").(Config)
sshKeyId := state.Get("ssh_key_id").(uint)
ui.Say("Creating droplet...")
sshKeyId := state.Get("ssh_key_id").(int)
// Create the droplet based on configuration
dropletId, err := client.CreateDroplet(c.DropletName, c.Size, c.Image, c.Region, sshKeyId, c.PrivateNetworking)
ui.Say("Creating droplet...")
droplet, _, err := client.Droplets.Create(&godo.DropletCreateRequest{
Name: c.DropletName,
Region: c.Region,
Size: c.Size,
Image: godo.DropletCreateImage{
Slug: c.Image,
},
SSHKeys: []godo.DropletCreateSSHKey{
godo.DropletCreateSSHKey{ID: int(sshKeyId)},
},
PrivateNetworking: c.PrivateNetworking,
UserData: c.UserData,
})
if err != nil {
err := fmt.Errorf("Error creating droplet: %s", err)
state.Put("error", err)
@ -30,10 +41,10 @@ func (s *stepCreateDroplet) Run(state multistep.StateBag) multistep.StepAction {
}
// We use this in cleanup
s.dropletId = dropletId
s.dropletId = droplet.ID
// Store the droplet id for later
state.Put("droplet_id", dropletId)
state.Put("droplet_id", droplet.ID)
return multistep.ActionContinue
}
@ -44,19 +55,14 @@ func (s *stepCreateDroplet) Cleanup(state multistep.StateBag) {
return
}
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
c := state.Get("config").(Config)
// Destroy the droplet we just created
ui.Say("Destroying droplet...")
err := client.DestroyDroplet(s.dropletId)
_, err := client.Droplets.Delete(s.dropletId)
if err != nil {
curlstr := fmt.Sprintf("curl '%v/droplets/%v/destroy?client_id=%v&api_key=%v'",
c.APIURL, s.dropletId, c.ClientID, c.APIKey)
ui.Error(fmt.Sprintf(
"Error destroying droplet. Please destroy it manually: %v", curlstr))
"Error destroying droplet. Please destroy it manually: %s", err))
}
}

View File

@ -7,19 +7,25 @@ import (
"encoding/pem"
"fmt"
"log"
"os"
"runtime"
"code.google.com/p/gosshold/ssh"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/packer"
)
type stepCreateSSHKey struct {
keyId uint
Debug bool
DebugKeyPath string
keyId int
}
func (s *stepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
ui.Say("Creating temporary ssh key for droplet...")
@ -46,7 +52,10 @@ func (s *stepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction {
name := fmt.Sprintf("packer-%s", uuid.TimeOrderedUUID())
// Create the key!
keyId, err := client.CreateKey(name, pub_sshformat)
key, _, err := client.Keys.Create(&godo.KeyCreateRequest{
Name: name,
PublicKey: pub_sshformat,
})
if err != nil {
err := fmt.Errorf("Error creating temporary SSH key: %s", err)
state.Put("error", err)
@ -55,12 +64,37 @@ func (s *stepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction {
}
// We use this to check cleanup
s.keyId = keyId
s.keyId = key.ID
log.Printf("temporary ssh key name: %s", name)
// Remember some state for the future
state.Put("ssh_key_id", keyId)
state.Put("ssh_key_id", key.ID)
// If we're in debug mode, output the private key to the working directory.
if s.Debug {
ui.Message(fmt.Sprintf("Saving key for debug purposes: %s", s.DebugKeyPath))
f, err := os.Create(s.DebugKeyPath)
if err != nil {
state.Put("error", fmt.Errorf("Error saving debug key: %s", err))
return multistep.ActionHalt
}
defer f.Close()
// Write the key out
if _, err := f.Write(pem.EncodeToMemory(&priv_blk)); err != nil {
state.Put("error", fmt.Errorf("Error saving debug key: %s", err))
return multistep.ActionHalt
}
// Chmod it so that it is SSH ready
if runtime.GOOS != "windows" {
if err := f.Chmod(0600); err != nil {
state.Put("error", fmt.Errorf("Error setting permissions of debug key: %s", err))
return multistep.ActionHalt
}
}
}
return multistep.ActionContinue
}
@ -71,18 +105,14 @@ func (s *stepCreateSSHKey) Cleanup(state multistep.StateBag) {
return
}
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
c := state.Get("config").(Config)
ui.Say("Deleting temporary ssh key...")
err := client.DestroyKey(s.keyId)
curlstr := fmt.Sprintf("curl -H 'Authorization: Bearer #TOKEN#' -X DELETE '%v/v2/account/keys/%v'", c.APIURL, s.keyId)
_, err := client.Keys.DeleteByID(s.keyId)
if err != nil {
log.Printf("Error cleaning up ssh key: %v", err.Error())
log.Printf("Error cleaning up ssh key: %s", err)
ui.Error(fmt.Sprintf(
"Error cleaning up ssh key. Please delete the key manually: %v", curlstr))
"Error cleaning up ssh key. Please delete the key manually: %s", err))
}
}
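pub_sshformat and priv_blk come from unchanged context that this diff does not show. A rough sketch of producing them — using golang.org/x/crypto/ssh purely for illustration rather than the gosshold import above — could look like this (assumed imports: crypto/rand, crypto/rsa, crypto/x509, encoding/pem, golang.org/x/crypto/ssh):

// generateTempSSHKey is an illustrative helper (not part of this commit): it
// produces the PEM block and authorized_keys string used by the step above.
func generateTempSSHKey() (pem.Block, string, error) {
    priv, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        return pem.Block{}, "", err
    }
    priv_blk := pem.Block{
        Type:  "RSA PRIVATE KEY",
        Bytes: x509.MarshalPKCS1PrivateKey(priv),
    }
    pub, err := ssh.NewPublicKey(&priv.PublicKey)
    if err != nil {
        return pem.Block{}, "", err
    }
    return priv_blk, string(ssh.MarshalAuthorizedKey(pub)), nil
}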

View File

@ -3,6 +3,7 @@ package digitalocean
import (
"fmt"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
@ -10,10 +11,10 @@ import (
type stepDropletInfo struct{}
func (s *stepDropletInfo) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
c := state.Get("config").(Config)
dropletId := state.Get("droplet_id").(uint)
dropletId := state.Get("droplet_id").(int)
ui.Say("Waiting for droplet to become active...")
@ -26,16 +27,25 @@ func (s *stepDropletInfo) Run(state multistep.StateBag) multistep.StepAction {
}
// Set the IP on the state for later
ip, _, err := client.DropletStatus(dropletId)
droplet, _, err := client.Droplets.Get(dropletId)
if err != nil {
err := fmt.Errorf("Error retrieving droplet ID: %s", err)
err := fmt.Errorf("Error retrieving droplet: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
state.Put("droplet_ip", ip)
// Verify we have an IPv4 address
invalid := droplet.Networks == nil ||
len(droplet.Networks.V4) == 0
if invalid {
err := fmt.Errorf("IPv4 address not found for droplet!")
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
state.Put("droplet_ip", droplet.Networks.V4[0].IPAddress)
return multistep.ActionContinue
}
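The step above takes the first V4 entry, while the deleted v2 client's DropletStatus selected the address whose Type is "public". A hedged sketch of carrying that behavior over to godo, in case the first entry is ever a private address (for example with private_networking enabled):

// Prefer the public IPv4 address, falling back to the first one listed.
// Illustrative only; the step currently uses V4[0] directly.
ip := droplet.Networks.V4[0].IPAddress
for _, v4 := range droplet.Networks.V4 {
    if v4.Type == "public" {
        ip = v4.IPAddress
        break
    }
}
state.Put("droplet_ip", ip)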

View File

@ -3,7 +3,9 @@ package digitalocean
import (
"fmt"
"log"
"time"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
@ -11,12 +13,12 @@ import (
type stepPowerOff struct{}
func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
c := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
dropletId := state.Get("droplet_id").(uint)
dropletId := state.Get("droplet_id").(int)
_, status, err := client.DropletStatus(dropletId)
droplet, _, err := client.Droplets.Get(dropletId)
if err != nil {
err := fmt.Errorf("Error checking droplet state: %s", err)
state.Put("error", err)
@ -24,14 +26,14 @@ func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
if status == "off" {
if droplet.Status == "off" {
// Droplet is already off, don't do anything
return multistep.ActionContinue
}
// Pull the plug on the Droplet
ui.Say("Forcefully shutting down Droplet...")
err = client.PowerOffDroplet(dropletId)
_, _, err = client.DropletActions.PowerOff(dropletId)
if err != nil {
err := fmt.Errorf("Error powering off droplet: %s", err)
state.Put("error", err)
@ -47,6 +49,15 @@ func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
// Wait for the droplet to become unlocked for future steps
if err := waitForDropletUnlocked(client, dropletId, 2*time.Minute); err != nil {
// If we get an error the first time, actually report it
err := fmt.Errorf("Error powering off droplet: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
return multistep.ActionContinue
}
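waitForDropletUnlocked is called here and again in the shutdown and snapshot steps, but its body is not part of this diff. A minimal sketch, assuming it simply polls the droplet until godo reports Locked == false or the timeout passes:

// waitForDropletUnlocked polls the droplet every few seconds until it is no
// longer locked, returning an error if the timeout elapses first.
// Illustrative sketch; the real helper lives elsewhere in this builder.
func waitForDropletUnlocked(client *godo.Client, dropletId int, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        droplet, _, err := client.Droplets.Get(dropletId)
        if err != nil {
            return err
        }
        if !droplet.Locked {
            return nil
        }
        time.Sleep(3 * time.Second)
    }
    return fmt.Errorf("timeout while waiting for droplet %d to unlock", dropletId)
}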

View File

@ -5,6 +5,7 @@ import (
"log"
"time"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
@ -12,16 +13,16 @@ import (
type stepShutdown struct{}
func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
dropletId := state.Get("droplet_id").(uint)
dropletId := state.Get("droplet_id").(int)
// Gracefully power off the droplet. We have to retry this a number
// of times because sometimes it says it completed when it actually
// did absolutely nothing (*ALAKAZAM!* magic!). We give up after
// a pretty arbitrary amount of time.
ui.Say("Gracefully shutting down droplet...")
err := client.ShutdownDroplet(dropletId)
_, _, err := client.DropletActions.Shutdown(dropletId)
if err != nil {
// If we get an error the first time, actually report it
err := fmt.Errorf("Error shutting down droplet: %s", err)
@ -48,7 +49,7 @@ func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
for attempts := 2; attempts > 0; attempts++ {
log.Printf("ShutdownDroplet attempt #%d...", attempts)
err := client.ShutdownDroplet(dropletId)
_, _, err := client.DropletActions.Shutdown(dropletId)
if err != nil {
log.Printf("Shutdown retry error: %s", err)
}
@ -64,7 +65,19 @@ func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
err = waitForDropletState("off", dropletId, client, 2*time.Minute)
if err != nil {
log.Printf("Error waiting for graceful off: %s", err)
// If we get an error the first time, actually report it
err := fmt.Errorf("Error shutting down droplet: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
if err := waitForDropletUnlocked(client, dropletId, 2*time.Minute); err != nil {
// If we get an error the first time, actually report it
err := fmt.Errorf("Error shutting down droplet: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
return multistep.ActionContinue

View File

@ -4,7 +4,9 @@ import (
"errors"
"fmt"
"log"
"time"
"github.com/digitalocean/godo"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
@ -12,13 +14,13 @@ import (
type stepSnapshot struct{}
func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(DigitalOceanClient)
client := state.Get("client").(*godo.Client)
ui := state.Get("ui").(packer.Ui)
c := state.Get("config").(Config)
dropletId := state.Get("droplet_id").(uint)
dropletId := state.Get("droplet_id").(int)
ui.Say(fmt.Sprintf("Creating snapshot: %v", c.SnapshotName))
err := client.CreateSnapshot(dropletId, c.SnapshotName)
_, _, err := client.DropletActions.Snapshot(dropletId, c.SnapshotName)
if err != nil {
err := fmt.Errorf("Error creating snapshot: %s", err)
state.Put("error", err)
@ -26,6 +28,18 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
// Wait for the droplet to become unlocked first. For snapshots
// this can end up taking quite a long time, so we hardcode this to
// 10 minutes.
if err := waitForDropletUnlocked(client, dropletId, 10*time.Minute); err != nil {
// Report the unlock failure so the build halts with a useful message
err := fmt.Errorf("Error waiting for droplet to unlock: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
// With the pending state over, verify that we're in the active state
ui.Say("Waiting for snapshot to complete...")
err = waitForDropletState("active", dropletId, client, c.stateTimeout)
if err != nil {
@ -36,7 +50,7 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
}
log.Printf("Looking up snapshot ID for snapshot: %s", c.SnapshotName)
images, err := client.Images()
images, _, err := client.Images.ListUser(&godo.ListOptions{PerPage: 200})
if err != nil {
err := fmt.Errorf("Error looking up snapshot ID: %s", err)
state.Put("error", err)
@ -44,10 +58,10 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
return multistep.ActionHalt
}
var imageId uint
var imageId int
for _, image := range images {
if image.Name == c.SnapshotName {
imageId = image.Id
imageId = image.ID
break
}
}
@ -60,7 +74,6 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {
}
log.Printf("Snapshot image ID: %d", imageId)
state.Put("snapshot_image_id", imageId)
state.Put("snapshot_name", c.SnapshotName)
state.Put("region", c.Region)

View File

@ -0,0 +1,15 @@
package digitalocean
import (
"golang.org/x/oauth2"
)
type apiTokenSource struct {
AccessToken string
}
func (t *apiTokenSource) Token() (*oauth2.Token, error) {
return &oauth2.Token{
AccessToken: t.AccessToken,
}, nil
}
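
The token source above satisfies oauth2.TokenSource with a static token. A minimal sketch of how such a source is typically wired into a godo client; the constructor calls are the standard godo/oauth2 APIs rather than anything shown in this hunk, and the token value is a placeholder:

package main

import (
    "github.com/digitalocean/godo"
    "golang.org/x/oauth2"
)

// staticTokenSource mirrors apiTokenSource above: it always hands back
// the same access token.
type staticTokenSource struct {
    AccessToken string
}

func (t *staticTokenSource) Token() (*oauth2.Token, error) {
    return &oauth2.Token{AccessToken: t.AccessToken}, nil
}

func newClient(token string) *godo.Client {
    // oauth2.NewClient builds an *http.Client that injects the bearer
    // token into every request; godo.NewClient wraps that client.
    ts := &staticTokenSource{AccessToken: token}
    return godo.NewClient(oauth2.NewClient(oauth2.NoContext, ts))
}

func main() {
    _ = newClient("example-api-token")
}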

View File

@ -4,11 +4,64 @@ import (
"fmt"
"log"
"time"
"github.com/digitalocean/godo"
)
// waitForDropletUnlocked waits for the Droplet to be unlocked to
// avoid "pending" errors when making state changes.
func waitForDropletUnlocked(
client *godo.Client, dropletId int, timeout time.Duration) error {
done := make(chan struct{})
defer close(done)
result := make(chan error, 1)
go func() {
attempts := 0
for {
attempts += 1
log.Printf("[DEBUG] Checking droplet lock state... (attempt: %d)", attempts)
droplet, _, err := client.Droplets.Get(dropletId)
if err != nil {
result <- err
return
}
if !droplet.Locked {
result <- nil
return
}
// Wait 3 seconds in between
time.Sleep(3 * time.Second)
// Verify we shouldn't exit
select {
case <-done:
// We finished, so just exit the goroutine
return
default:
// Keep going
}
}
}()
log.Printf("[DEBUG] Waiting for up to %d seconds for droplet to unlock", timeout/time.Second)
select {
case err := <-result:
return err
case <-time.After(timeout):
return fmt.Errorf(
"Timeout while waiting for droplet to unlock")
}
}
// waitForDropletState simply blocks until the droplet is in
// a state we expect, while eventually timing out.
func waitForDropletState(desiredState string, dropletId uint, client DigitalOceanClient, timeout time.Duration) error {
func waitForDropletState(
desiredState string, dropletId int,
client *godo.Client, timeout time.Duration) error {
done := make(chan struct{})
defer close(done)
@ -19,13 +72,13 @@ func waitForDropletState(desiredState string, dropletId uint, client DigitalOcea
attempts += 1
log.Printf("Checking droplet status... (attempt: %d)", attempts)
_, status, err := client.DropletStatus(dropletId)
droplet, _, err := client.Droplets.Get(dropletId)
if err != nil {
result <- err
return
}
if status == desiredState {
if droplet.Status == desiredState {
result <- nil
return
}
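
Both wait helpers share the same shape: a goroutine polls the API, pushes its outcome onto a buffered result channel, and watches a done channel so it stops if the caller gives up, while the caller selects between the result and time.After. A stripped-down, generic version of that pattern (the check function here is a stand-in, not a DigitalOcean call):

package main

import (
    "errors"
    "fmt"
    "time"
)

// pollUntil runs check every interval until it reports done, returns an
// error, or the timeout elapses.
func pollUntil(check func() (bool, error), interval, timeout time.Duration) error {
    done := make(chan struct{})
    defer close(done)

    result := make(chan error, 1)
    go func() {
        for {
            ok, err := check()
            if err != nil || ok {
                result <- err
                return
            }
            time.Sleep(interval)

            select {
            case <-done:
                // The caller timed out; stop polling.
                return
            default:
            }
        }
    }()

    select {
    case err := <-result:
        return err
    case <-time.After(timeout):
        return errors.New("timeout while polling")
    }
}

func main() {
    start := time.Now()
    err := pollUntil(func() (bool, error) {
        return time.Since(start) > 2*time.Second, nil
    }, 500*time.Millisecond, 5*time.Second)
    fmt.Println(err)
}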

View File

@ -31,10 +31,10 @@ type Config struct {
}
func NewConfig(raws ...interface{}) (*Config, []string, error) {
var c Config
c := new(Config)
var md mapstructure.Metadata
err := config.Decode(&c, &config.DecodeOpts{
err := config.Decode(c, &config.DecodeOpts{
Metadata: &md,
Interpolate: true,
InterpolateFilter: &interpolate.RenderFilter{
@ -91,5 +91,5 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
return nil, nil, errs
}
return &c, nil, nil
return c, nil, nil
}

View File

@ -47,7 +47,7 @@ type Config struct {
func NewConfig(raws ...interface{}) (*Config, []string, error) {
c := new(Config)
err := config.Decode(&c, &config.DecodeOpts{
err := config.Decode(c, &config.DecodeOpts{
Interpolate: true,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{

View File

@ -21,7 +21,7 @@ type Config struct {
func NewConfig(raws ...interface{}) (*Config, []string, error) {
c := new(Config)
err := config.Decode(&c, &config.DecodeOpts{
err := config.Decode(c, &config.DecodeOpts{
Interpolate: true,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{

View File

@ -4,99 +4,120 @@ import (
"crypto/tls"
"fmt"
"net/http"
"net/url"
"os"
"strings"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/template/interpolate"
"github.com/rackspace/gophercloud"
"github.com/rackspace/gophercloud/openstack"
)
// AccessConfig is for common configuration related to openstack access
type AccessConfig struct {
Username string `mapstructure:"username"`
Password string `mapstructure:"password"`
ApiKey string `mapstructure:"api_key"`
Project string `mapstructure:"project"`
Provider string `mapstructure:"provider"`
RawRegion string `mapstructure:"region"`
ProxyUrl string `mapstructure:"proxy_url"`
TenantId string `mapstructure:"tenant_id"`
Insecure bool `mapstructure:"insecure"`
}
Username string `mapstructure:"username"`
UserID string `mapstructure:"user_id"`
Password string `mapstructure:"password"`
APIKey string `mapstructure:"api_key"`
IdentityEndpoint string `mapstructure:"identity_endpoint"`
TenantID string `mapstructure:"tenant_id"`
TenantName string `mapstructure:"tenant_name"`
DomainID string `mapstructure:"domain_id"`
DomainName string `mapstructure:"domain_name"`
Insecure bool `mapstructure:"insecure"`
Region string `mapstructure:"region"`
EndpointType string `mapstructure:"endpoint_type"`
// Auth returns a valid Auth object for access to openstack services, or
// an error if the authentication couldn't be resolved.
func (c *AccessConfig) Auth() (gophercloud.AccessProvider, error) {
c.Username = common.ChooseString(c.Username, os.Getenv("SDK_USERNAME"), os.Getenv("OS_USERNAME"))
c.Password = common.ChooseString(c.Password, os.Getenv("SDK_PASSWORD"), os.Getenv("OS_PASSWORD"))
c.ApiKey = common.ChooseString(c.ApiKey, os.Getenv("SDK_API_KEY"))
c.Project = common.ChooseString(c.Project, os.Getenv("SDK_PROJECT"), os.Getenv("OS_TENANT_NAME"))
c.Provider = common.ChooseString(c.Provider, os.Getenv("SDK_PROVIDER"), os.Getenv("OS_AUTH_URL"))
c.RawRegion = common.ChooseString(c.RawRegion, os.Getenv("SDK_REGION"), os.Getenv("OS_REGION_NAME"))
c.TenantId = common.ChooseString(c.TenantId, os.Getenv("OS_TENANT_ID"))
// OpenStack's auto-generated openrc.sh files do not append the suffix
// /tokens to the authentication URL. This ensures it is present when
// specifying the URL.
if strings.Contains(c.Provider, "://") && !strings.HasSuffix(c.Provider, "/tokens") {
c.Provider += "/tokens"
}
authoptions := gophercloud.AuthOptions{
AllowReauth: true,
ApiKey: c.ApiKey,
TenantId: c.TenantId,
TenantName: c.Project,
Username: c.Username,
Password: c.Password,
}
default_transport := &http.Transport{}
if c.Insecure {
cfg := new(tls.Config)
cfg.InsecureSkipVerify = true
default_transport.TLSClientConfig = cfg
}
// For corporate networks it may be the case where we want our API calls
// to be sent through a separate HTTP proxy than external traffic.
if c.ProxyUrl != "" {
url, err := url.Parse(c.ProxyUrl)
if err != nil {
return nil, err
}
// The gophercloud.Context has a UseCustomClient method which
// would allow us to override with a new instance of http.Client.
default_transport.Proxy = http.ProxyURL(url)
}
if c.Insecure || c.ProxyUrl != "" {
http.DefaultTransport = default_transport
}
return gophercloud.Authenticate(c.Provider, authoptions)
}
func (c *AccessConfig) Region() string {
return common.ChooseString(c.RawRegion, os.Getenv("SDK_REGION"), os.Getenv("OS_REGION_NAME"))
osClient *gophercloud.ProviderClient
}
func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error {
errs := make([]error, 0)
if strings.HasPrefix(c.Provider, "rackspace") {
if c.Region() == "" {
errs = append(errs, fmt.Errorf("region must be specified when using rackspace"))
if c.EndpointType != "internal" && c.EndpointType != "internalURL" &&
c.EndpointType != "admin" && c.EndpointType != "adminURL" &&
c.EndpointType != "public" && c.EndpointType != "publicURL" &&
c.EndpointType != "" {
return []error{fmt.Errorf("Invalid endpoint type provided")}
}
if c.Region == "" {
c.Region = os.Getenv("OS_REGION_NAME")
}
// Legacy RackSpace stuff. We're keeping this around to keep things BC.
if c.APIKey == "" {
c.APIKey = os.Getenv("SDK_API_KEY")
}
if c.Password == "" {
c.Password = os.Getenv("SDK_PASSWORD")
}
if c.Region == "" {
c.Region = os.Getenv("SDK_REGION")
}
if c.TenantName == "" {
c.TenantName = os.Getenv("SDK_PROJECT")
}
if c.Username == "" {
c.Username = os.Getenv("SDK_USERNAME")
}
// Get as much as possible from the environment
ao, _ := openstack.AuthOptionsFromEnv()
// Override values if we have them in our config
overrides := []struct {
From, To *string
}{
{&c.Username, &ao.Username},
{&c.UserID, &ao.UserID},
{&c.Password, &ao.Password},
{&c.APIKey, &ao.APIKey},
{&c.IdentityEndpoint, &ao.IdentityEndpoint},
{&c.TenantID, &ao.TenantID},
{&c.TenantName, &ao.TenantName},
{&c.DomainID, &ao.DomainID},
{&c.DomainName, &ao.DomainName},
}
for _, s := range overrides {
if *s.From != "" {
*s.To = *s.From
}
}
if len(errs) > 0 {
return errs
// Build the client itself
client, err := openstack.NewClient(ao.IdentityEndpoint)
if err != nil {
return []error{err}
}
// If we have insecure set, then create a custom HTTP client that
// ignores SSL errors.
if c.Insecure {
config := &tls.Config{InsecureSkipVerify: true}
transport := &http.Transport{TLSClientConfig: config}
client.HTTPClient.Transport = transport
}
// Auth
err = openstack.Authenticate(client, ao)
if err != nil {
return []error{err}
}
c.osClient = client
return nil
}
func (c *AccessConfig) computeV2Client() (*gophercloud.ServiceClient, error) {
return openstack.NewComputeV2(c.osClient, gophercloud.EndpointOpts{
Region: c.Region,
Availability: c.getEndpointType(),
})
}
func (c *AccessConfig) getEndpointType() gophercloud.Availability {
if c.EndpointType == "internal" || c.EndpointType == "internalURL" {
return gophercloud.AvailabilityInternal
}
if c.EndpointType == "admin" || c.EndpointType == "adminURL" {
return gophercloud.AvailabilityAdmin
}
return gophercloud.AvailabilityPublic
}
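
For reference, a sketch of how the reworked auth flow could be exercised with explicit fields instead of OS_* environment variables. The endpoint and credentials are placeholders, this test is not part of the change, and it would only pass against a reachable Keystone endpoint:

package openstack

import "testing"

func TestAccessConfigPrepare_explicitFields(t *testing.T) {
    c := &AccessConfig{
        IdentityEndpoint: "http://keystone.example.com:5000/v2.0",
        Username:         "packer",
        Password:         "secret",
        TenantName:       "demo",
        Region:           "RegionOne",
    }

    // Prepare authenticates against the identity endpoint and stores the
    // provider client, so network failures surface here.
    if errs := c.Prepare(nil); errs != nil {
        t.Skipf("Prepare failed (no live endpoint?): %v", errs)
    }

    if _, err := c.computeV2Client(); err != nil {
        t.Fatalf("error building compute client: %s", err)
    }
}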

View File

@ -1,77 +0,0 @@
package openstack
import (
"os"
"testing"
)
func init() {
// Clear out the openstack env vars so they don't
// affect our tests.
os.Setenv("SDK_REGION", "")
os.Setenv("OS_REGION_NAME", "")
}
func testAccessConfig() *AccessConfig {
return &AccessConfig{}
}
func TestAccessConfigPrepare_NoRegion_Rackspace(t *testing.T) {
c := testAccessConfig()
c.Provider = "rackspace-us"
if err := c.Prepare(nil); err == nil {
t.Fatalf("shouldn't have err: %s", err)
}
}
func TestAccessConfigRegionWithEmptyEnv(t *testing.T) {
c := testAccessConfig()
c.Prepare(nil)
if c.Region() != "" {
t.Fatalf("Region should be empty")
}
}
func TestAccessConfigRegionWithSdkRegionEnv(t *testing.T) {
c := testAccessConfig()
c.Prepare(nil)
expectedRegion := "sdk_region"
os.Setenv("SDK_REGION", expectedRegion)
os.Setenv("OS_REGION_NAME", "")
if c.Region() != expectedRegion {
t.Fatalf("Region should be: %s", expectedRegion)
}
}
func TestAccessConfigRegionWithOsRegionNameEnv(t *testing.T) {
c := testAccessConfig()
c.Prepare(nil)
expectedRegion := "os_region_name"
os.Setenv("SDK_REGION", "")
os.Setenv("OS_REGION_NAME", expectedRegion)
if c.Region() != expectedRegion {
t.Fatalf("Region should be: %s", expectedRegion)
}
}
func TestAccessConfigPrepare_NoRegion_PrivateCloud(t *testing.T) {
c := testAccessConfig()
c.Provider = "http://some-keystone-server:5000/v2.0"
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
}
func TestAccessConfigPrepare_Region(t *testing.T) {
dfw := "DFW"
c := testAccessConfig()
c.RawRegion = dfw
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
if dfw != c.Region() {
t.Fatalf("Regions do not match: %s %s", dfw, c.Region())
}
}

View File

@ -4,7 +4,8 @@ import (
"fmt"
"log"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/rackspace/gophercloud"
"github.com/rackspace/gophercloud/openstack/compute/v2/images"
)
// Artifact is an artifact implementation that contains built images.
@ -16,7 +17,7 @@ type Artifact struct {
BuilderIdValue string
// OpenStack connection for performing API stuff.
Conn gophercloud.CloudServersProvider
Client *gophercloud.ServiceClient
}
func (a *Artifact) BuilderId() string {
@ -42,5 +43,5 @@ func (a *Artifact) State(name string) interface{} {
func (a *Artifact) Destroy() error {
log.Printf("Destroying image: %s", a.ImageId)
return a.Conn.DeleteImageById(a.ImageId)
return images.Delete(a.Client, a.ImageId).ExtractErr()
}

View File

@ -9,7 +9,6 @@ import (
"github.com/mitchellh/packer/common"
"log"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/packer/helper/config"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/packer/template/interpolate"
@ -55,43 +54,32 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
}
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
auth, err := b.config.AccessConfig.Auth()
computeClient, err := b.config.computeV2Client()
if err != nil {
return nil, err
}
//fetches the api requisites from gophercloud for the appropriate
//openstack variant
api, err := gophercloud.PopulateApi(b.config.RunConfig.OpenstackProvider)
if err != nil {
return nil, err
}
api.Region = b.config.AccessConfig.Region()
csp, err := gophercloud.ServersApi(auth, api)
if err != nil {
log.Printf("Region: %s", b.config.AccessConfig.Region())
return nil, err
return nil, fmt.Errorf("Error initializing compute client: %s", err)
}
// Setup the state bag and initial state for the steps
state := new(multistep.BasicStateBag)
state.Put("config", b.config)
state.Put("csp", csp)
state.Put("hook", hook)
state.Put("ui", ui)
// Build the steps
steps := []multistep.Step{
&StepLoadFlavor{
Flavor: b.config.Flavor,
},
&StepKeyPair{
Debug: b.config.PackerDebug,
DebugKeyPath: fmt.Sprintf("os_%s.pem", b.config.PackerBuildName),
},
&StepRunSourceServer{
Name: b.config.ImageName,
Flavor: b.config.Flavor,
SourceImage: b.config.SourceImage,
SecurityGroups: b.config.SecurityGroups,
Networks: b.config.Networks,
Name: b.config.ImageName,
SourceImage: b.config.SourceImage,
SecurityGroups: b.config.SecurityGroups,
Networks: b.config.Networks,
AvailabilityZone: b.config.AvailabilityZone,
},
&StepWaitForRackConnect{
Wait: b.config.RackconnectWait,
@ -101,7 +89,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
FloatingIp: b.config.FloatingIp,
},
&common.StepConnectSSH{
SSHAddress: SSHAddress(csp, b.config.SSHInterface, b.config.SSHPort),
SSHAddress: SSHAddress(computeClient, b.config.SSHInterface, b.config.SSHPort),
SSHConfig: SSHConfig(b.config.SSHUsername),
SSHWaitTimeout: b.config.SSHTimeout(),
},
@ -135,7 +123,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
artifact := &Artifact{
ImageId: state.Get("image").(string),
BuilderIdValue: BuilderId,
Conn: csp,
Client: computeClient,
}
return artifact, nil

View File

@ -9,7 +9,6 @@ func testConfig() map[string]interface{} {
return map[string]interface{}{
"username": "foo",
"password": "bar",
"provider": "foo",
"region": "DFW",
"image_name": "foo",
"source_image": "foo",
@ -40,55 +39,3 @@ func TestBuilder_Prepare_BadType(t *testing.T) {
t.Fatalf("prepare should fail")
}
}
func TestBuilderPrepare_ImageName(t *testing.T) {
var b Builder
config := testConfig()
// Test good
config["image_name"] = "foo"
warns, err := b.Prepare(config)
if len(warns) > 0 {
t.Fatalf("bad: %#v", warns)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
// Test bad
config["image_name"] = "foo {{"
b = Builder{}
warns, err = b.Prepare(config)
if len(warns) > 0 {
t.Fatalf("bad: %#v", warns)
}
if err == nil {
t.Fatal("should have error")
}
// Test bad
delete(config, "image_name")
b = Builder{}
warns, err = b.Prepare(config)
if len(warns) > 0 {
t.Fatalf("bad: %#v", warns)
}
if err == nil {
t.Fatal("should have error")
}
}
func TestBuilderPrepare_InvalidKey(t *testing.T) {
var b Builder
config := testConfig()
// Add a random key
config["i_should_not_be_valid"] = true
warns, err := b.Prepare(config)
if len(warns) > 0 {
t.Fatalf("bad: %#v", warns)
}
if err == nil {
t.Fatal("should have error")
}
}

View File

@ -11,19 +11,22 @@ import (
// RunConfig contains configuration for running an instance from a source
// image and details on how to access that launched image.
type RunConfig struct {
SourceImage string `mapstructure:"source_image"`
Flavor string `mapstructure:"flavor"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
SSHUsername string `mapstructure:"ssh_username"`
SSHPort int `mapstructure:"ssh_port"`
SSHInterface string `mapstructure:"ssh_interface"`
OpenstackProvider string `mapstructure:"openstack_provider"`
UseFloatingIp bool `mapstructure:"use_floating_ip"`
RackconnectWait bool `mapstructure:"rackconnect_wait"`
FloatingIpPool string `mapstructure:"floating_ip_pool"`
FloatingIp string `mapstructure:"floating_ip"`
SecurityGroups []string `mapstructure:"security_groups"`
Networks []string `mapstructure:"networks"`
SourceImage string `mapstructure:"source_image"`
Flavor string `mapstructure:"flavor"`
RawSSHTimeout string `mapstructure:"ssh_timeout"`
SSHUsername string `mapstructure:"ssh_username"`
SSHPort int `mapstructure:"ssh_port"`
SSHInterface string `mapstructure:"ssh_interface"`
AvailabilityZone string `mapstructure:"availability_zone"`
RackconnectWait bool `mapstructure:"rackconnect_wait"`
FloatingIpPool string `mapstructure:"floating_ip_pool"`
FloatingIp string `mapstructure:"floating_ip"`
SecurityGroups []string `mapstructure:"security_groups"`
Networks []string `mapstructure:"networks"`
// Not really used, but here for BC
OpenstackProvider string `mapstructure:"openstack_provider"`
UseFloatingIp bool `mapstructure:"use_floating_ip"`
// Unexported fields that are calculated from others
sshTimeout time.Duration

View File

@ -3,12 +3,12 @@ package openstack
import (
"errors"
"fmt"
"github.com/mitchellh/multistep"
"github.com/racker/perigee"
"log"
"time"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/multistep"
"github.com/rackspace/gophercloud"
"github.com/rackspace/gophercloud/openstack/compute/v2/servers"
)
// StateRefreshFunc is a function type used for StateChangeConf that is
@ -33,21 +33,22 @@ type StateChangeConf struct {
// ServerStateRefreshFunc returns a StateRefreshFunc that is used to watch
// an openstack server.
func ServerStateRefreshFunc(csp gophercloud.CloudServersProvider, s *gophercloud.Server) StateRefreshFunc {
func ServerStateRefreshFunc(
client *gophercloud.ServiceClient, s *servers.Server) StateRefreshFunc {
return func() (interface{}, string, int, error) {
resp, err := csp.ServerById(s.Id)
serverNew, err := servers.Get(client, s.ID).Extract()
if err != nil {
urce, ok := err.(*perigee.UnexpectedResponseCodeError)
if ok && (urce.Actual == 404) {
log.Printf("404 on ServerStateRefresh, returning DELETED")
errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
if ok && errCode.Actual == 404 {
log.Printf("[INFO] 404 on ServerStateRefresh, returning DELETED")
return nil, "DELETED", 0, nil
} else {
log.Printf("Error on ServerStateRefresh: %s", err)
log.Printf("[ERROR] Error on ServerStateRefresh: %s", err)
return nil, "", 0, err
}
}
return resp, resp.Status, resp.Progress, nil
return serverNew, serverNew.Status, serverNew.Progress, nil
}
}

View File

@ -3,49 +3,67 @@ package openstack
import (
"errors"
"fmt"
"github.com/mitchellh/multistep"
"golang.org/x/crypto/ssh"
"log"
"time"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/multistep"
"github.com/rackspace/gophercloud"
"github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip"
"github.com/rackspace/gophercloud/openstack/compute/v2/servers"
"golang.org/x/crypto/ssh"
)
// SSHAddress returns a function that can be given to the SSH communicator
// for determining the SSH address based on the server AccessIPv4 setting.
func SSHAddress(csp gophercloud.CloudServersProvider, sshinterface string, port int) func(multistep.StateBag) (string, error) {
func SSHAddress(
client *gophercloud.ServiceClient,
sshinterface string, port int) func(multistep.StateBag) (string, error) {
return func(state multistep.StateBag) (string, error) {
s := state.Get("server").(*gophercloud.Server)
s := state.Get("server").(*servers.Server)
if ip := state.Get("access_ip").(gophercloud.FloatingIp); ip.Ip != "" {
return fmt.Sprintf("%s:%d", ip.Ip, port), nil
// If we have a floating IP, use that
ip := state.Get("access_ip").(*floatingip.FloatingIP)
if ip != nil && ip.IP != "" {
return fmt.Sprintf("%s:%d", ip.IP, port), nil
}
ip_pools, err := s.AllAddressPools()
if err != nil {
return "", errors.New("Error parsing SSH addresses")
if s.AccessIPv4 != "" {
return fmt.Sprintf("%s:%d", s.AccessIPv4, port), nil
}
for pool, addresses := range ip_pools {
if sshinterface != "" {
if pool != sshinterface {
continue
}
// Get all the addresses associated with this server. This
// was taken directly from Terraform.
for _, networkAddresses := range s.Addresses {
elements, ok := networkAddresses.([]interface{})
if !ok {
log.Printf(
"[ERROR] Unknown return type for address field: %#v",
networkAddresses)
continue
}
if pool != "" {
for _, address := range addresses {
if address.Addr != "" && address.Version == 4 {
return fmt.Sprintf("%s:%d", address.Addr, port), nil
for _, element := range elements {
var addr string
address := element.(map[string]interface{})
if address["OS-EXT-IPS:type"] == "floating" {
addr = address["addr"].(string)
} else {
if address["version"].(float64) == 4 {
addr = address["addr"].(string)
}
}
if addr != "" {
return fmt.Sprintf("%s:%d", addr, port), nil
}
}
}
serverState, err := csp.ServerById(s.Id)
s, err := servers.Get(client, s.ID).Extract()
if err != nil {
return "", err
}
state.Put("server", serverState)
state.Put("server", s)
time.Sleep(1 * time.Second)
return "", errors.New("couldn't determine IP address for server")
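
The fallback loop above walks the raw Addresses map that nova returns: pool name to a list of address entries, each a map with addr, version, and OS-EXT-IPS:type keys. A standalone illustration of that shape with made-up values:

package main

import "fmt"

func main() {
    // Shape mirrors servers.Server.Addresses after JSON decoding, which is
    // why version comes through as float64.
    addresses := map[string]interface{}{
        "private": []interface{}{
            map[string]interface{}{
                "addr":            "10.0.0.5",
                "version":         float64(4),
                "OS-EXT-IPS:type": "fixed",
            },
            map[string]interface{}{
                "addr":            "203.0.113.10",
                "version":         float64(4),
                "OS-EXT-IPS:type": "floating",
            },
        },
    }

    for _, networkAddresses := range addresses {
        for _, element := range networkAddresses.([]interface{}) {
            address := element.(map[string]interface{})
            // Print every address the step above would consider usable:
            // floating addresses, or any IPv4 entry.
            if address["OS-EXT-IPS:type"] == "floating" || address["version"].(float64) == 4 {
                fmt.Println(address["addr"].(string))
            }
        }
    }
}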

View File

@ -2,10 +2,11 @@ package openstack
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/rackspace/gophercloud/openstack/compute/v2/extensions/floatingip"
"github.com/rackspace/gophercloud/openstack/compute/v2/servers"
)
type StepAllocateIp struct {
@ -15,53 +16,83 @@ type StepAllocateIp struct {
func (s *StepAllocateIp) Run(state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packer.Ui)
csp := state.Get("csp").(gophercloud.CloudServersProvider)
server := state.Get("server").(*gophercloud.Server)
config := state.Get("config").(Config)
server := state.Get("server").(*servers.Server)
// We need the v2 compute client
client, err := config.computeV2Client()
if err != nil {
err = fmt.Errorf("Error initializing compute client: %s", err)
state.Put("error", err)
return multistep.ActionHalt
}
var instanceIp floatingip.FloatingIP
var instanceIp gophercloud.FloatingIp
// This is here in case we error out before putting instanceIp into the
// statebag below, because it is requested by Cleanup()
state.Put("access_ip", instanceIp)
state.Put("access_ip", &instanceIp)
if s.FloatingIp != "" {
instanceIp.Ip = s.FloatingIp
instanceIp.IP = s.FloatingIp
} else if s.FloatingIpPool != "" {
newIp, err := csp.CreateFloatingIp(s.FloatingIpPool)
ui.Say(fmt.Sprintf("Creating floating IP..."))
ui.Message(fmt.Sprintf("Pool: %s", s.FloatingIpPool))
newIp, err := floatingip.Create(client, floatingip.CreateOpts{
Pool: s.FloatingIpPool,
}).Extract()
if err != nil {
err := fmt.Errorf("Error creating floating IP from pool '%s': %s", s.FloatingIpPool, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
instanceIp = newIp
ui.Say(fmt.Sprintf("Created temporary floating IP %s...", instanceIp.Ip))
instanceIp = *newIp
ui.Message(fmt.Sprintf("Created floating IP: %s", instanceIp.IP))
}
if instanceIp.Ip != "" {
if err := csp.AssociateFloatingIp(server.Id, instanceIp); err != nil {
err := fmt.Errorf("Error associating floating IP %s with instance.", instanceIp.Ip)
if instanceIp.IP != "" {
ui.Say(fmt.Sprintf("Associating floating IP with server..."))
ui.Message(fmt.Sprintf("IP: %s", instanceIp.IP))
err := floatingip.Associate(client, server.ID, instanceIp.IP).ExtractErr()
if err != nil {
err := fmt.Errorf(
"Error associating floating IP %s with instance: %s",
instanceIp.IP, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
} else {
ui.Say(fmt.Sprintf("Added floating IP %s to instance...", instanceIp.Ip))
}
ui.Message(fmt.Sprintf(
"Added floating IP %s to instance!", instanceIp.IP))
}
state.Put("access_ip", instanceIp)
state.Put("access_ip", &instanceIp)
return multistep.ActionContinue
}
func (s *StepAllocateIp) Cleanup(state multistep.StateBag) {
config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
csp := state.Get("csp").(gophercloud.CloudServersProvider)
instanceIp := state.Get("access_ip").(gophercloud.FloatingIp)
if s.FloatingIpPool != "" && instanceIp.Id != 0 {
if err := csp.DeleteFloatingIp(instanceIp); err != nil {
ui.Error(fmt.Sprintf("Error deleting temporary floating IP %s", instanceIp.Ip))
instanceIp := state.Get("access_ip").(*floatingip.FloatingIP)
// We need the v2 compute client
client, err := config.computeV2Client()
if err != nil {
ui.Error(fmt.Sprintf(
"Error deleting temporary floating IP %s", instanceIp.IP))
return
}
if s.FloatingIpPool != "" && instanceIp.ID != "" {
if err := floatingip.Delete(client, instanceIp.ID).ExtractErr(); err != nil {
ui.Error(fmt.Sprintf(
"Error deleting temporary floating IP %s", instanceIp.IP))
return
}
ui.Say(fmt.Sprintf("Deleted temporary floating IP %s", instanceIp.Ip))
ui.Say(fmt.Sprintf("Deleted temporary floating IP %s", instanceIp.IP))
}
}

View File

@ -2,28 +2,36 @@ package openstack
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
"time"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"github.com/rackspace/gophercloud"
"github.com/rackspace/gophercloud/openstack/compute/v2/images"
"github.com/rackspace/gophercloud/openstack/compute/v2/servers"
)
type stepCreateImage struct{}
func (s *stepCreateImage) Run(state multistep.StateBag) multistep.StepAction {
csp := state.Get("csp").(gophercloud.CloudServersProvider)
config := state.Get("config").(Config)
server := state.Get("server").(*gophercloud.Server)
server := state.Get("server").(*servers.Server)
ui := state.Get("ui").(packer.Ui)
// We need the v2 compute client
client, err := config.computeV2Client()
if err != nil {
err = fmt.Errorf("Error initializing compute client: %s", err)
state.Put("error", err)
return multistep.ActionHalt
}
// Create the image
ui.Say(fmt.Sprintf("Creating the image: %s", config.ImageName))
createOpts := gophercloud.CreateImage{
imageId, err := servers.CreateImage(client, server.ID, servers.CreateImageOpts{
Name: config.ImageName,
}
imageId, err := csp.CreateImage(server.Id, createOpts)
}).ExtractImageID()
if err != nil {
err := fmt.Errorf("Error creating image: %s", err)
state.Put("error", err)
@ -32,12 +40,12 @@ func (s *stepCreateImage) Run(state multistep.StateBag) multistep.StepAction {
}
// Set the Image ID in the state
ui.Say(fmt.Sprintf("Image: %s", imageId))
ui.Message(fmt.Sprintf("Image: %s", imageId))
state.Put("image", imageId)
// Wait for the image to become ready
ui.Say("Waiting for image to become ready...")
if err := WaitForImage(csp, imageId); err != nil {
if err := WaitForImage(client, imageId); err != nil {
err := fmt.Errorf("Error waiting for image: %s", err)
state.Put("error", err)
ui.Error(err.Error())
@ -52,10 +60,17 @@ func (s *stepCreateImage) Cleanup(multistep.StateBag) {
}
// WaitForImage waits for the given Image ID to become ready.
func WaitForImage(csp gophercloud.CloudServersProvider, imageId string) error {
func WaitForImage(client *gophercloud.ServiceClient, imageId string) error {
for {
image, err := csp.ImageById(imageId)
image, err := images.Get(client, imageId).Extract()
if err != nil {
errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
if ok && errCode.Actual == 500 {
log.Printf("[ERROR] 500 error received, will ignore and retry: %s", err)
time.Sleep(2 * time.Second)
continue
}
return err
}

View File

@ -2,14 +2,13 @@ package openstack
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/packer"
"log"
"os"
"runtime"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common/uuid"
"github.com/mitchellh/packer/packer"
"github.com/rackspace/gophercloud/openstack/compute/v2/extensions/keypairs"
)
type StepKeyPair struct {
@ -19,18 +18,28 @@ type StepKeyPair struct {
}
func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
csp := state.Get("csp").(gophercloud.CloudServersProvider)
config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
// We need the v2 compute client
computeClient, err := config.computeV2Client()
if err != nil {
err = fmt.Errorf("Error initializing compute client: %s", err)
state.Put("error", err)
return multistep.ActionHalt
}
ui.Say("Creating temporary keypair for this instance...")
keyName := fmt.Sprintf("packer %s", uuid.TimeOrderedUUID())
log.Printf("temporary keypair name: %s", keyName)
keyResp, err := csp.CreateKeyPair(gophercloud.NewKeyPair{Name: keyName})
keypair, err := keypairs.Create(computeClient, keypairs.CreateOpts{
Name: keyName,
}).Extract()
if err != nil {
state.Put("error", fmt.Errorf("Error creating temporary keypair: %s", err))
return multistep.ActionHalt
}
if keyResp.PrivateKey == "" {
if keypair.PrivateKey == "" {
state.Put("error", fmt.Errorf("The temporary keypair returned was blank"))
return multistep.ActionHalt
}
@ -47,7 +56,7 @@ func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
defer f.Close()
// Write the key out
if _, err := f.Write([]byte(keyResp.PrivateKey)); err != nil {
if _, err := f.Write([]byte(keypair.PrivateKey)); err != nil {
state.Put("error", fmt.Errorf("Error saving debug key: %s", err))
return multistep.ActionHalt
}
@ -66,7 +75,7 @@ func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction {
// Set some state data for use in future steps
state.Put("keyPair", keyName)
state.Put("privateKey", keyResp.PrivateKey)
state.Put("privateKey", keypair.PrivateKey)
return multistep.ActionContinue
}
@ -77,11 +86,19 @@ func (s *StepKeyPair) Cleanup(state multistep.StateBag) {
return
}
csp := state.Get("csp").(gophercloud.CloudServersProvider)
config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
// We need the v2 compute client
computeClient, err := config.computeV2Client()
if err != nil {
ui.Error(fmt.Sprintf(
"Error cleaning up keypair. Please delete the key manually: %s", s.keyName))
return
}
ui.Say("Deleting temporary keypair...")
err := csp.DeleteKeyPair(s.keyName)
err = keypairs.Delete(computeClient, s.keyName).ExtractErr()
if err != nil {
ui.Error(fmt.Sprintf(
"Error cleaning up keypair. Please delete the key manually: %s", s.keyName))

View File

@ -0,0 +1,61 @@
package openstack
import (
"fmt"
"log"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"github.com/rackspace/gophercloud/openstack/compute/v2/flavors"
)
// StepLoadFlavor gets the FlavorRef from a Flavor. It first assumes
// that the Flavor is a ref and verifies it. Otherwise, it tries to find
// the flavor by name.
type StepLoadFlavor struct {
Flavor string
}
func (s *StepLoadFlavor) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
// We need the v2 compute client
client, err := config.computeV2Client()
if err != nil {
err = fmt.Errorf("Error initializing compute client: %s", err)
state.Put("error", err)
return multistep.ActionHalt
}
ui.Say(fmt.Sprintf("Loading flavor: %s", s.Flavor))
log.Printf("[INFO] Loading flavor by ID: %s", s.Flavor)
flavor, err := flavors.Get(client, s.Flavor).Extract()
if err != nil {
log.Printf("[ERROR] Failed to find flavor by ID: %s", err)
geterr := err
log.Printf("[INFO] Loading flavor by name: %s", s.Flavor)
id, err := flavors.IDFromName(client, s.Flavor)
if err != nil {
log.Printf("[ERROR] Failed to find flavor by name: %s", err)
err = fmt.Errorf(
"Unable to find specified flavor by ID or name!\n\n"+
"Error from ID lookup: %s\n\n"+
"Error from name lookup: %s",
geterr,
err)
state.Put("error", err)
return multistep.ActionHalt
}
flavor = &flavors.Flavor{ID: id}
}
ui.Message(fmt.Sprintf("Verified flavor. ID: %s", flavor.ID))
state.Put("flavor_id", flavor.ID)
return multistep.ActionContinue
}
func (s *StepLoadFlavor) Cleanup(state multistep.StateBag) {
}

View File

@ -2,51 +2,56 @@ package openstack
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"github.com/rackspace/gophercloud/openstack/compute/v2/extensions/keypairs"
"github.com/rackspace/gophercloud/openstack/compute/v2/servers"
)
type StepRunSourceServer struct {
Flavor string
Name string
SourceImage string
SecurityGroups []string
Networks []string
Name string
SourceImage string
SecurityGroups []string
Networks []string
AvailabilityZone string
server *gophercloud.Server
server *servers.Server
}
func (s *StepRunSourceServer) Run(state multistep.StateBag) multistep.StepAction {
csp := state.Get("csp").(gophercloud.CloudServersProvider)
config := state.Get("config").(Config)
flavor := state.Get("flavor_id").(string)
keyName := state.Get("keyPair").(string)
ui := state.Get("ui").(packer.Ui)
// XXX - validate image and flavor is available
securityGroups := make([]map[string]interface{}, len(s.SecurityGroups))
for i, groupName := range s.SecurityGroups {
securityGroups[i] = make(map[string]interface{})
securityGroups[i]["name"] = groupName
// We need the v2 compute client
computeClient, err := config.computeV2Client()
if err != nil {
err = fmt.Errorf("Error initializing compute client: %s", err)
state.Put("error", err)
return multistep.ActionHalt
}
networks := make([]gophercloud.NetworkConfig, len(s.Networks))
networks := make([]servers.Network, len(s.Networks))
for i, networkUuid := range s.Networks {
networks[i].Uuid = networkUuid
networks[i].UUID = networkUuid
}
server := gophercloud.NewServer{
Name: s.Name,
ImageRef: s.SourceImage,
FlavorRef: s.Flavor,
KeyPairName: keyName,
SecurityGroup: securityGroups,
Networks: networks,
}
ui.Say("Launching server...")
s.server, err = servers.Create(computeClient, keypairs.CreateOptsExt{
CreateOptsBuilder: servers.CreateOpts{
Name: s.Name,
ImageRef: s.SourceImage,
FlavorRef: flavor,
SecurityGroups: s.SecurityGroups,
Networks: networks,
AvailabilityZone: s.AvailabilityZone,
},
serverResp, err := csp.CreateServer(server)
KeyName: keyName,
}).Extract()
if err != nil {
err := fmt.Errorf("Error launching source server: %s", err)
state.Put("error", err)
@ -54,25 +59,25 @@ func (s *StepRunSourceServer) Run(state multistep.StateBag) multistep.StepAction
return multistep.ActionHalt
}
s.server, err = csp.ServerById(serverResp.Id)
log.Printf("server id: %s", s.server.Id)
ui.Message(fmt.Sprintf("Server ID: %s", s.server.ID))
log.Printf("server id: %s", s.server.ID)
ui.Say(fmt.Sprintf("Waiting for server (%s) to become ready...", s.server.Id))
ui.Say("Waiting for server to become ready...")
stateChange := StateChangeConf{
Pending: []string{"BUILD"},
Target: "ACTIVE",
Refresh: ServerStateRefreshFunc(csp, s.server),
Refresh: ServerStateRefreshFunc(computeClient, s.server),
StepState: state,
}
latestServer, err := WaitForState(&stateChange)
if err != nil {
err := fmt.Errorf("Error waiting for server (%s) to become ready: %s", s.server.Id, err)
err := fmt.Errorf("Error waiting for server (%s) to become ready: %s", s.server.ID, err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
s.server = latestServer.(*gophercloud.Server)
s.server = latestServer.(*servers.Server)
state.Put("server", s.server)
return multistep.ActionContinue
@ -83,18 +88,25 @@ func (s *StepRunSourceServer) Cleanup(state multistep.StateBag) {
return
}
csp := state.Get("csp").(gophercloud.CloudServersProvider)
config := state.Get("config").(Config)
ui := state.Get("ui").(packer.Ui)
// We need the v2 compute client
computeClient, err := config.computeV2Client()
if err != nil {
ui.Error(fmt.Sprintf("Error terminating server, may still be around: %s", err))
return
}
ui.Say("Terminating the source server...")
if err := csp.DeleteServerById(s.server.Id); err != nil {
if err := servers.Delete(computeClient, s.server.ID).ExtractErr(); err != nil {
ui.Error(fmt.Sprintf("Error terminating server, may still be around: %s", err))
return
}
stateChange := StateChangeConf{
Pending: []string{"ACTIVE", "BUILD", "REBUILD", "SUSPENDED"},
Refresh: ServerStateRefreshFunc(csp, s.server),
Refresh: ServerStateRefreshFunc(computeClient, s.server),
Target: "DELETED",
}

View File

@ -2,11 +2,11 @@ package openstack
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"time"
"github.com/mitchellh/gophercloud-fork-40444fb"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"github.com/rackspace/gophercloud/openstack/compute/v2/servers"
)
type StepWaitForRackConnect struct {
@ -18,14 +18,22 @@ func (s *StepWaitForRackConnect) Run(state multistep.StateBag) multistep.StepAct
return multistep.ActionContinue
}
csp := state.Get("csp").(gophercloud.CloudServersProvider)
server := state.Get("server").(*gophercloud.Server)
config := state.Get("config").(Config)
server := state.Get("server").(*servers.Server)
ui := state.Get("ui").(packer.Ui)
ui.Say(fmt.Sprintf("Waiting for server (%s) to become RackConnect ready...", server.Id))
// We need the v2 compute client
computeClient, err := config.computeV2Client()
if err != nil {
err = fmt.Errorf("Error initializing compute client: %s", err)
state.Put("error", err)
return multistep.ActionHalt
}
ui.Say(fmt.Sprintf(
"Waiting for server (%s) to become RackConnect ready...", server.ID))
for {
server, err := csp.ServerById(server.Id)
server, err = servers.Get(computeClient, server.ID).Extract()
if err != nil {
return multistep.ActionHalt
}

View File

@ -73,6 +73,12 @@ func NewDriver() (Driver, error) {
log.Printf("prlctl path: %s", prlctlPath)
drivers = map[string]Driver{
"11": &Parallels10Driver{
Parallels9Driver: Parallels9Driver{
PrlctlPath: prlctlPath,
dhcp_lease_file: dhcp_lease_file,
},
},
"10": &Parallels10Driver{
Parallels9Driver: Parallels9Driver{
PrlctlPath: prlctlPath,

View File

@ -1,6 +1,7 @@
package common
// Parallels10Driver are inherited from Parallels9Driver.
// Used for Parallels Desktop versions 10 and 11
type Parallels10Driver struct {
Parallels9Driver
}

View File

@ -33,7 +33,7 @@ type Config struct {
func NewConfig(raws ...interface{}) (*Config, []string, error) {
c := new(Config)
err := config.Decode(&c, &config.DecodeOpts{
err := config.Decode(c, &config.DecodeOpts{
Interpolate: true,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{

View File

@ -18,6 +18,7 @@ type SSHConfig struct {
SSHPort uint `mapstructure:"ssh_port"`
SSHUser string `mapstructure:"ssh_username"`
RawSSHWaitTimeout string `mapstructure:"ssh_wait_timeout"`
SSHSkipNatMapping bool `mapstructure:"ssh_skip_nat_mapping"`
SSHWaitTimeout time.Duration
}

View File

@ -17,9 +17,10 @@ import (
// Produces:
// exportPath string - The path to the resulting export.
type StepExport struct {
Format string
OutputDir string
ExportOpts []string
Format string
OutputDir string
ExportOpts []string
SkipNatMapping bool
}
func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction {
@ -30,30 +31,31 @@ func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction {
// Wait a second to ensure VM is really shutdown.
log.Println("1 second timeout to ensure VM is really shutdown")
time.Sleep(1 * time.Second)
ui.Say("Preparing to export machine...")
// Clear out the Packer-created forwarding rule
ui.Say("Preparing to export machine...")
ui.Message(fmt.Sprintf(
"Deleting forwarded port mapping for SSH (host port %d)",
state.Get("sshHostPort")))
command := []string{"modifyvm", vmName, "--natpf1", "delete", "packerssh"}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error deleting port forwarding rule: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
if !s.SkipNatMapping {
ui.Message(fmt.Sprintf(
"Deleting forwarded port mapping for SSH (host port %d)",
state.Get("sshHostPort")))
command := []string{"modifyvm", vmName, "--natpf1", "delete", "packerssh"}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error deleting port forwarding rule: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
// Export the VM to an OVF
outputPath := filepath.Join(s.OutputDir, vmName+"."+s.Format)
command = []string{
command := []string{
"export",
vmName,
"--output",
outputPath,
}
command = append(command, s.ExportOpts...)
ui.Say("Exporting virtual machine...")

View File

@ -19,9 +19,10 @@ import (
//
// Produces:
type StepForwardSSH struct {
GuestPort uint
HostPortMin uint
HostPortMax uint
GuestPort uint
HostPortMin uint
HostPortMax uint
SkipNatMapping bool
}
func (s *StepForwardSSH) Run(state multistep.StateBag) multistep.StepAction {
@ -29,39 +30,41 @@ func (s *StepForwardSSH) Run(state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packer.Ui)
vmName := state.Get("vmName").(string)
log.Printf("Looking for available SSH port between %d and %d",
s.HostPortMin, s.HostPortMax)
var sshHostPort uint
var offset uint = 0
sshHostPort := s.GuestPort
if !s.SkipNatMapping {
log.Printf("Looking for available SSH port between %d and %d",
s.HostPortMin, s.HostPortMax)
var offset uint = 0
portRange := int(s.HostPortMax - s.HostPortMin)
if portRange > 0 {
// Have to check if > 0 to avoid a panic
offset = uint(rand.Intn(portRange))
}
for {
sshHostPort = offset + s.HostPortMin
log.Printf("Trying port: %d", sshHostPort)
l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", sshHostPort))
if err == nil {
defer l.Close()
break
portRange := int(s.HostPortMax - s.HostPortMin)
if portRange > 0 {
// Have to check if > 0 to avoid a panic
offset = uint(rand.Intn(portRange))
}
}
// Create a forwarded port mapping to the VM
ui.Say(fmt.Sprintf("Creating forwarded port mapping for SSH (host port %d)", sshHostPort))
command := []string{
"modifyvm", vmName,
"--natpf1",
fmt.Sprintf("packerssh,tcp,127.0.0.1,%d,,%d", sshHostPort, s.GuestPort),
}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error creating port forwarding rule: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
for {
sshHostPort = offset + s.HostPortMin
log.Printf("Trying port: %d", sshHostPort)
l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", sshHostPort))
if err == nil {
defer l.Close()
break
}
}
// Create a forwarded port mapping to the VM
ui.Say(fmt.Sprintf("Creating forwarded port mapping for SSH (host port %d)", sshHostPort))
command := []string{
"modifyvm", vmName,
"--natpf1",
fmt.Sprintf("packerssh,tcp,127.0.0.1,%d,,%d", sshHostPort, s.GuestPort),
}
if err := driver.VBoxManage(command...); err != nil {
err := fmt.Errorf("Error creating port forwarding rule: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
}
// Save the port we're using so that future steps can use it

View File

@ -230,6 +230,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
Description: "ISO",
ResultKey: "iso_path",
Url: b.config.ISOUrls,
Extension: "iso",
},
&vboxcommon.StepOutputDir{
Force: b.config.PackerForce,
@ -252,9 +253,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
},
new(vboxcommon.StepAttachFloppy),
&vboxcommon.StepForwardSSH{
GuestPort: b.config.SSHPort,
HostPortMin: b.config.SSHHostPortMin,
HostPortMax: b.config.SSHHostPortMax,
GuestPort: b.config.SSHPort,
HostPortMin: b.config.SSHHostPortMin,
HostPortMax: b.config.SSHHostPortMax,
SkipNatMapping: b.config.SSHSkipNatMapping,
},
&vboxcommon.StepVBoxManage{
Commands: b.config.VBoxManage,
@ -293,9 +295,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
Ctx: b.config.ctx,
},
&vboxcommon.StepExport{
Format: b.config.Format,
OutputDir: b.config.OutputDir,
ExportOpts: b.config.ExportOpts.ExportOpts,
Format: b.config.Format,
OutputDir: b.config.OutputDir,
ExportOpts: b.config.ExportOpts.ExportOpts,
SkipNatMapping: b.config.SSHSkipNatMapping,
},
}

View File

@ -82,9 +82,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
},
new(vboxcommon.StepAttachFloppy),
&vboxcommon.StepForwardSSH{
GuestPort: b.config.SSHPort,
HostPortMin: b.config.SSHHostPortMin,
HostPortMax: b.config.SSHHostPortMax,
GuestPort: b.config.SSHPort,
HostPortMin: b.config.SSHHostPortMin,
HostPortMax: b.config.SSHHostPortMax,
SkipNatMapping: b.config.SSHSkipNatMapping,
},
&vboxcommon.StepVBoxManage{
Commands: b.config.VBoxManage,
@ -123,9 +124,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
Ctx: b.config.ctx,
},
&vboxcommon.StepExport{
Format: b.config.Format,
OutputDir: b.config.OutputDir,
ExportOpts: b.config.ExportOpts.ExportOpts,
Format: b.config.Format,
OutputDir: b.config.OutputDir,
ExportOpts: b.config.ExportOpts.ExportOpts,
SkipNatMapping: b.config.SSHSkipNatMapping,
},
}

View File

@ -40,8 +40,8 @@ type Config struct {
}
func NewConfig(raws ...interface{}) (*Config, []string, error) {
var c Config
err := config.Decode(&c, &config.DecodeOpts{
c := new(Config)
err := config.Decode(c, &config.DecodeOpts{
Interpolate: true,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{
@ -132,5 +132,5 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
c.ImportFlags = append(c.ImportFlags, "--options", c.ImportOpts)
}
return &c, warnings, nil
return c, warnings, nil
}

View File

@ -5,6 +5,7 @@ import (
"fmt"
"log"
"os/exec"
"regexp"
"runtime"
"strconv"
"strings"
@ -135,6 +136,18 @@ func runAndLog(cmd *exec.Cmd) (string, string, error) {
}
err = fmt.Errorf("VMware error: %s", message)
// If "unknown error" is in there, add some additional notes
re := regexp.MustCompile(`(?i)unknown error`)
if re.MatchString(message) {
err = fmt.Errorf(
"%s\n\n%s", err,
"Packer detected a VMware 'Unknown Error'. Unfortunately VMware\n"+
"often has extremely vague error messages such as this and Packer\n"+
"itself can't do much about that. Please check the vmware.log files\n"+
"created by VMware when a VM is started (in the directory of the\n"+
"vmx file), which often contains more detailed error information.")
}
}
log.Printf("stdout: %s", stdoutString)

View File

@ -36,6 +36,18 @@ func (s StepCompactDisk) Run(state multistep.StateBag) multistep.StepAction {
state.Put("error", fmt.Errorf("Error compacting disk: %s", err))
return multistep.ActionHalt
}
if state.Get("additional_disk_paths") != nil {
if moreDisks := state.Get("additional_disk_paths").([]string); len(moreDisks) > 0 {
for i, path := range moreDisks {
ui.Say(fmt.Sprintf("Compacting additional disk image %d", i+1))
if err := driver.CompactDisk(path); err != nil {
state.Put("error", fmt.Errorf("Error compacting additional disk %d: %s", i+1, err))
return multistep.ActionHalt
}
}
}
}
return multistep.ActionContinue
}

View File

@ -35,19 +35,20 @@ type Config struct {
vmwcommon.ToolsConfig `mapstructure:",squash"`
vmwcommon.VMXConfig `mapstructure:",squash"`
DiskName string `mapstructure:"vmdk_name"`
DiskSize uint `mapstructure:"disk_size"`
DiskTypeId string `mapstructure:"disk_type_id"`
FloppyFiles []string `mapstructure:"floppy_files"`
GuestOSType string `mapstructure:"guest_os_type"`
ISOChecksum string `mapstructure:"iso_checksum"`
ISOChecksumType string `mapstructure:"iso_checksum_type"`
ISOUrls []string `mapstructure:"iso_urls"`
Version string `mapstructure:"version"`
VMName string `mapstructure:"vm_name"`
BootCommand []string `mapstructure:"boot_command"`
SkipCompaction bool `mapstructure:"skip_compaction"`
VMXTemplatePath string `mapstructure:"vmx_template_path"`
AdditionalDiskSize []uint `mapstructure:"disk_additional_size"`
DiskName string `mapstructure:"vmdk_name"`
DiskSize uint `mapstructure:"disk_size"`
DiskTypeId string `mapstructure:"disk_type_id"`
FloppyFiles []string `mapstructure:"floppy_files"`
GuestOSType string `mapstructure:"guest_os_type"`
ISOChecksum string `mapstructure:"iso_checksum"`
ISOChecksumType string `mapstructure:"iso_checksum_type"`
ISOUrls []string `mapstructure:"iso_urls"`
Version string `mapstructure:"version"`
VMName string `mapstructure:"vm_name"`
BootCommand []string `mapstructure:"boot_command"`
SkipCompaction bool `mapstructure:"skip_compaction"`
VMXTemplatePath string `mapstructure:"vmx_template_path"`
RemoteType string `mapstructure:"remote_type"`
RemoteDatastore string `mapstructure:"remote_datastore"`

View File

@ -311,8 +311,8 @@ func (d *ESX5Driver) String() string {
}
func (d *ESX5Driver) datastorePath(path string) string {
baseDir := filepath.Base(filepath.Dir(path))
return filepath.ToSlash(filepath.Join("/vmfs/volumes", d.Datastore, baseDir, filepath.Base(path)))
dirPath := filepath.Dir(path)
return filepath.ToSlash(filepath.Join("/vmfs/volumes", d.Datastore, dirPath, filepath.Base(path)))
}
func (d *ESX5Driver) cachePath(path string) string {

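The datastorePath change above switches from keeping only the immediate parent directory to preserving the path's full relative directory. A small standalone comparison; the datastore name and input path are placeholders:

package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    path := "output-esx/disks/disk-1.vmdk"
    ds := "datastore1"

    // Previous behavior: only the last directory component survives.
    oldStyle := filepath.ToSlash(filepath.Join(
        "/vmfs/volumes", ds, filepath.Base(filepath.Dir(path)), filepath.Base(path)))

    // New behavior: the whole relative directory is preserved.
    newStyle := filepath.ToSlash(filepath.Join(
        "/vmfs/volumes", ds, filepath.Dir(path), filepath.Base(path)))

    fmt.Println(oldStyle) // /vmfs/volumes/datastore1/disks/disk-1.vmdk
    fmt.Println(newStyle) // /vmfs/volumes/datastore1/output-esx/disks/disk-1.vmdk
}
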
View File

@ -35,6 +35,28 @@ func (stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction {
state.Put("full_disk_path", full_disk_path)
if len(config.AdditionalDiskSize) > 0 {
// stash the disk paths we create
additional_paths := make([]string, len(config.AdditionalDiskSize))
ui.Say("Creating additional hard drives...")
for i, additionalsize := range config.AdditionalDiskSize {
additionalpath := filepath.Join(config.OutputDir, fmt.Sprintf("%s-%d.vmdk", config.DiskName, i+1))
size := fmt.Sprintf("%dM", uint64(additionalsize))
if err := driver.CreateDisk(additionalpath, size, config.DiskTypeId); err != nil {
err := fmt.Errorf("Error creating additional disk: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
additional_paths[i] = additionalpath
}
state.Put("additional_disk_paths", additional_paths)
}
return multistep.ActionContinue
}

View File

@ -20,6 +20,11 @@ type vmxTemplateData struct {
Version string
}
type additionalDiskTemplateData struct {
DiskNumber int
DiskName string
}
// This step creates the VMX file for the VM.
//
// Uses:
@ -40,15 +45,6 @@ func (s *stepCreateVMX) Run(state multistep.StateBag) multistep.StepAction {
ui.Say("Building and writing VMX file")
ctx := config.ctx
ctx.Data = &vmxTemplateData{
Name: config.VMName,
GuestOS: config.GuestOSType,
DiskName: config.DiskName,
Version: config.Version,
ISOPath: isoPath,
}
vmxTemplate := DefaultVMXTemplate
if config.VMXTemplatePath != "" {
f, err := os.Open(config.VMXTemplatePath)
@ -71,6 +67,35 @@ func (s *stepCreateVMX) Run(state multistep.StateBag) multistep.StepAction {
vmxTemplate = string(rawBytes)
}
ctx := config.ctx
if len(config.AdditionalDiskSize) > 0 {
for i := range config.AdditionalDiskSize {
ctx.Data = &additionalDiskTemplateData{
DiskNumber: i + 1,
DiskName: config.DiskName,
}
diskTemplate, err := interpolate.Render(DefaultAdditionalDiskTemplate, &ctx)
if err != nil {
err := fmt.Errorf("Error preparing VMX template for additional disk: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
vmxTemplate += diskTemplate
}
}
ctx.Data = &vmxTemplateData{
Name: config.VMName,
GuestOS: config.GuestOSType,
DiskName: config.DiskName,
Version: config.Version,
ISOPath: isoPath,
}
vmxContents, err := interpolate.Render(vmxTemplate, &ctx)
if err != nil {
err := fmt.Errorf("Error processing VMX template: %s", err)
@ -191,3 +216,9 @@ vmci0.pciSlotNumber = "35"
vmci0.present = "TRUE"
vmotion.checkpointFBSize = "65536000"
`
const DefaultAdditionalDiskTemplate = `
scsi0:{{ .DiskNumber }}.fileName = "{{ .DiskName}}-{{ .DiskNumber }}.vmdk"
scsi0:{{ .DiskNumber }}.present = "TRUE"
scsi0:{{ .DiskNumber }}.redo = ""
`
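
To make the new fragment concrete, here is a standalone render of it using the standard library's text/template (the builder itself goes through packer's interpolate package); DiskNumber 1 is the first additional disk and DiskName "disk" assumes the builder's usual default vmdk_name:

package main

import (
    "os"
    "text/template"
)

const additionalDisk = `
scsi0:{{ .DiskNumber }}.fileName = "{{ .DiskName }}-{{ .DiskNumber }}.vmdk"
scsi0:{{ .DiskNumber }}.present = "TRUE"
scsi0:{{ .DiskNumber }}.redo = ""
`

func main() {
    t := template.Must(template.New("disk").Parse(additionalDisk))

    // Prints scsi0:1.fileName = "disk-1.vmdk", scsi0:1.present = "TRUE",
    // and scsi0:1.redo = "" on separate lines.
    err := t.Execute(os.Stdout, struct {
        DiskNumber int
        DiskName   string
    }{DiskNumber: 1, DiskName: "disk"})
    if err != nil {
        panic(err)
    }
}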

View File

@ -157,7 +157,7 @@ func (c *PushCommand) Run(args []string) int {
// Build the upload options
var uploadOpts uploadOpts
uploadOpts.Slug = push.Name
uploadOpts.Slug = name
uploadOpts.Builds = make(map[string]*uploadBuildInfo)
for _, b := range tpl.Builders {
info := &uploadBuildInfo{Type: b.Type}
@ -236,7 +236,7 @@ func (c *PushCommand) Run(args []string) int {
return 1
}
c.Ui.Say(fmt.Sprintf("Push successful to '%s'", push.Name))
c.Ui.Say(fmt.Sprintf("Push successful to '%s'", name))
return 0
}

View File

@ -3,13 +3,14 @@ package common
import (
"errors"
"fmt"
"log"
"strings"
"time"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/communicator/ssh"
"github.com/mitchellh/packer/packer"
gossh "golang.org/x/crypto/ssh"
"log"
"strings"
"time"
)
// StepConnectSSH is a multistep Step implementation that waits for SSH
@ -64,6 +65,7 @@ WaitLoop:
case <-waitDone:
if err != nil {
ui.Error(fmt.Sprintf("Error waiting for SSH: %s", err))
state.Put("error", err)
return multistep.ActionHalt
}

View File

@ -1,6 +1,7 @@
package common
import (
"crypto/sha1"
"encoding/hex"
"fmt"
"log"
@ -36,6 +37,12 @@ type StepDownload struct {
// A list of URLs to attempt to download this thing.
Url []string
// Extension is the extension to force for the file that is downloaded.
// Some systems require a certain extension. If this isn't set, the
// extension on the URL is used. Otherwise, this will be forced
// on the downloaded file for every URL.
Extension string
}
func (s *StepDownload) Run(state multistep.StateBag) multistep.StepAction {
@ -60,9 +67,19 @@ func (s *StepDownload) Run(state multistep.StateBag) multistep.StepAction {
targetPath := s.TargetPath
if targetPath == "" {
// Determine a cache key. This is normally just the URL but
// if we force a certain extension we hash the URL and add
// the extension to force it.
cacheKey := url
if s.Extension != "" {
hash := sha1.Sum([]byte(url))
cacheKey = fmt.Sprintf(
"%s.%s", hex.EncodeToString(hash[:]), s.Extension)
}
log.Printf("Acquiring lock to download: %s", url)
targetPath = cache.Lock(url)
defer cache.Unlock(url)
targetPath = cache.Lock(cacheKey)
defer cache.Unlock(cacheKey)
}
config := &DownloadConfig{

View File

@ -6,6 +6,7 @@ import (
"log"
"os/exec"
"path/filepath"
"runtime"
"strings"
"github.com/mitchellh/osext"
@ -172,6 +173,15 @@ func (c *config) discoverSingle(glob string, m *map[string]string) error {
for _, match := range matches {
file := filepath.Base(match)
// On Windows, ignore any plugins that don't end in .exe.
// We could do a full PATHEXT parse, but this is probably good enough.
if runtime.GOOS == "windows" && strings.ToLower(filepath.Ext(file)) != ".exe" {
log.Printf(
"[DEBUG] Ignoring plugin match %s, no exe extension",
match)
continue
}
// If the filename has a ".", trim up to there
if idx := strings.Index(file, "."); idx >= 0 {
file = file[:idx]
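Taken together, discovery on Windows keeps only `.exe` files and then trims the name at the first dot so the extension never leaks into the plugin name. A minimal sketch of that filter (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
	"strings"
)

// pluginFileName applies the same two rules as the discovery loop above:
// on Windows only ".exe" files are accepted, and the base name is cut at
// the first "." so "packer-builder-foo.exe" becomes "packer-builder-foo".
func pluginFileName(match string) (string, bool) {
	file := filepath.Base(match)
	if runtime.GOOS == "windows" && strings.ToLower(filepath.Ext(file)) != ".exe" {
		return "", false
	}
	if idx := strings.Index(file, "."); idx >= 0 {
		file = file[:idx]
	}
	return file, true
}

func main() {
	fmt.Println(pluginFileName("/usr/local/bin/packer-builder-foo.exe"))
	fmt.Println(pluginFileName("/usr/local/bin/packer-builder-foo"))
}
```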

View File

@ -94,7 +94,7 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
}
if p.config.InlineShebang == "" {
p.config.InlineShebang = "/bin/sh"
p.config.InlineShebang = "/bin/sh -e"
}
if p.config.RawStartRetryTimeout == "" {
@ -247,11 +247,11 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
}
cmd = &packer.RemoteCmd{
Command: fmt.Sprintf("chmod 0777 %s", p.config.RemotePath),
Command: fmt.Sprintf("chmod 0755 %s", p.config.RemotePath),
}
if err := comm.Start(cmd); err != nil {
return fmt.Errorf(
"Error chmodding script file to 0777 in remote "+
"Error chmodding script file to 0755 in remote "+
"machine: %s", err)
}
cmd.Wait()

View File

@ -45,7 +45,7 @@ func TestProvisionerPrepare_InlineShebang(t *testing.T) {
t.Fatalf("should not have error: %s", err)
}
if p.config.InlineShebang != "/bin/sh" {
if p.config.InlineShebang != "/bin/sh -e" {
t.Fatalf("bad value: %s", p.config.InlineShebang)
}

View File

@ -4,9 +4,9 @@ package main
var GitCommit string
// The main version number that is being run at the moment.
const Version = "0.7.5"
const Version = "0.8.0"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release
// such as "dev" (in development), "beta", "rc1", etc.
const VersionPrerelease = ""
const VersionPrerelease = "dev"

View File

@ -79,18 +79,18 @@ GEM
celluloid (~> 0.16.0)
rb-fsevent (>= 0.9.3)
rb-inotify (>= 0.9)
middleman (3.3.13)
middleman (3.3.12)
coffee-script (~> 2.2)
compass (>= 1.0.0, < 2.0.0)
compass-import-once (= 1.0.5)
execjs (~> 2.0)
haml (>= 4.0.5)
kramdown (~> 1.2)
middleman-core (= 3.3.13)
middleman-core (= 3.3.12)
middleman-sprockets (>= 3.1.2)
sass (>= 3.4.0, < 4.0)
uglifier (~> 2.5)
middleman-core (3.3.13)
middleman-core (3.3.12)
activesupport (~> 4.1.0)
bundler (~> 1.1)
erubis
@ -175,3 +175,6 @@ PLATFORMS
DEPENDENCIES
middleman-hashicorp!
BUNDLED WITH
1.10.2

View File

@ -144,7 +144,8 @@ each category, the available configuration keys are alphabetized.
or "5m". The default SSH timeout is "5m", or five minutes.
* `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance.
"subnet-12345def", where Packer will launch the EC2 instance. This field is
required if you are using a non-default VPC.
* `tags` (object of key/value strings) - Tags applied to the AMI.

View File

@ -184,7 +184,8 @@ each category, the available configuration keys are alphabetized.
or "5m". The default SSH timeout is "5m", or five minutes.
* `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance.
"subnet-12345def", where Packer will launch the EC2 instance. This field is
required if you are using a non-default VPC.
* `tags` (object of key/value strings) - Tags applied to the AMI.

View File

@ -24,62 +24,30 @@ There are many configuration options available for the builder. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
### Required v1 api:
### Required:
* `api_key` (string) - The API key to use to access your account. You can
retrieve this on the "API" page visible after logging into your account
on DigitalOcean.
If not specified, Packer will use the environment variable
`DIGITALOCEAN_API_KEY`, if set.
* `api_token` (string) - The API token to use to access your account.
It can also be specified via the environment variable `DIGITALOCEAN_API_TOKEN`, if set.
* `client_id` (string) - The client ID to use to access your account. You can
find this on the "API" page visible after logging into your account on
DigitalOcean.
If not specified, Packer will use the environment variable
`DIGITALOCEAN_CLIENT_ID`, if set.
* `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it.
See https://developers.digitalocean.com/documentation/v2/#list-all-images for details on how to get a list of the accepted image names/slugs.
### Required v2 api:
* `region` (string) - The name (or slug) of the region to launch the droplet in.
Consequently, this is the region where the snapshot will be available.
See https://developers.digitalocean.com/documentation/v2/#list-all-regions for the accepted region names/slugs.
* `api_token` (string) - The API token to use to access your account. If it
is specified, the current v2 API is used; otherwise the old, deprecated v1 API is used.
It can also be specified via the environment variable `DIGITALOCEAN_API_TOKEN`, if set.
* `size` (string) - The name (or slug) of the droplet size to use.
See https://developers.digitalocean.com/documentation/v2/#list-all-sizes for the accepted size names/slugs.
### Optional:
* `api_url` (string) - The API endpoint to use. Defaults to https://api.digitalocean.com.
It can also be specified via the environment variable `DIGITALOCEAN_API_URL`, if set.
* `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
sets the hostname of the machine to this value.
* `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it. This
defaults to 'ubuntu-12-04-x64' which is the slug for "Ubuntu 12.04.4 x64".
See https://developers.digitalocean.com/documentation/v2/#list-all-images for details on how to get a list of the accepted image names/slugs.
* `image_id` (integer) - The ID of the base image to use. This is the image that
will be used to launch a new droplet and provision it.
This setting is deprecated. Use `image` instead.
* `private_networking` (boolean) - Set to `true` to enable private networking
for the droplet being created. This defaults to `false`, or not enabled.
* `region` (string) - The name (or slug) of the region to launch the droplet in.
Consequently, this is the region where the snapshot will be available.
This defaults to "nyc3", which is the slug for "New York 3".
See https://developers.digitalocean.com/documentation/v2/#list-all-regions for the accepted region names/slugs.
* `region_id` (integer) - The ID of the region to launch the droplet in. Consequently,
this is the region where the snapshot will be available.
This setting is deprecated. Use `region` instead.
* `size` (string) - The name (or slug) of the droplet size to use.
This defaults to "512mb", which is the slug for "512MB".
See https://developers.digitalocean.com/documentation/v2/#list-all-sizes for the accepted size names/slugs.
* `size_id` (integer) - The ID of the droplet size to use.
This setting is deprecated. Use `size` instead.
* `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. This must be unique.
To help make this unique, use a function like `timestamp` (see
@ -99,6 +67,8 @@ each category, the available configuration keys are alphabetized.
for a droplet to enter a desired state (such as "active") before
timing out. The default state timeout is "6m".
* `user_data` (string) - User data to launch with the Droplet.
## Basic Example
Here is a basic example. It is completely valid as soon as you enter your
@ -107,20 +77,9 @@ own access tokens:
```javascript
{
"type": "digitalocean",
"client_id": "YOUR CLIENT ID",
"api_key": "YOUR API KEY"
"api_token": "YOUR API KEY",
"image": "ubuntu-12-04-x64",
"region": "nyc2",
"size": "512mb"
}
```
## Finding Image, Region, and Size IDs
Unfortunately, finding a list of available values for `image_id`, `region_id`,
and `size_id` is not easy at the moment. Basically, it has to be done through
the [DigitalOcean API](https://www.digitalocean.com/api_access) using the
`/images`, `/regions`, and `/sizes` endpoints. You can use `curl` for this
or request it in your browser.
If you're comfortable installing RubyGems, [Tugboat](https://github.com/pearkes/tugboat)
is a fantastic DigitalOcean command-line client that has commands to
find the available images, regions, and sizes. For example, to see all the
global images, you can run `tugboat images --global`.

View File

@ -29,28 +29,32 @@ each category, the available configuration keys are alphabetized.
### Required:
* `flavor` (string) - The ID or full URL for the desired flavor for the
* `flavor` (string) - The ID, name, or full URL for the desired flavor for the
server to be created.
* `image_name` (string) - The name of the resulting image.
* `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variables
`SDK_PASSWORD` or `OS_PASSWORD` (in that order), if set.
* `source_image` (string) - The ID or full URL to the base image to use.
This is the image that will be used to launch a new server and provision it.
Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly.
* `username` (string) - The username used to connect to the OpenStack service.
If not specified, Packer will use the environment variable
`OS_USERNAME`, if set.
* `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variables
`SDK_USERNAME` or `OS_USERNAME` (in that order), if set.
`OS_PASSWORD`, if set.
### Optional:
* `api_key` (string) - The API key used to access OpenStack. Some OpenStack
installations require this.
If not specified, Packer will use the environment variables
`SDK_API_KEY`, if set.
* `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters.
* `floating_ip` (string) - A specific floating IP to assign to this instance.
`use_floating_ip` must also be set to true for this to have an effect.
@ -65,32 +69,18 @@ each category, the available configuration keys are alphabetized.
* `networks` (array of strings) - A list of networks by UUID to attach
to this instance.
* `openstack_provider` (string) - A name of a provider that has a slightly
different API model. Currently supported values are "openstack" (default),
and "rackspace".
* `project` (string) - The project name to boot the instance into. Some
OpenStack installations require this.
If not specified, Packer will use the environment variables
`SDK_PROJECT` or `OS_TENANT_NAME` (in that order), if set.
* `provider` (string) - The provider used to connect to the OpenStack service.
If not specified, Packer will use the environment variables `SDK_PROVIDER`
or `OS_AUTH_URL` (in that order), if set.
For Rackspace this should be `rackspace-us` or `rackspace-uk`.
* `proxy_url` (string)
* `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this.
If not specified, Packer will use the environment variable
`OS_TENANT_NAME`, if set.
* `security_groups` (array of strings) - A list of security groups by name
to add to this instance.
* `region` (string) - The name of the region, such as "DFW", in which
to launch the server to create the AMI.
If not specified, Packer will use the environment variables
`SDK_REGION` or `OS_REGION_NAME` (in that order), if set.
For a `provider` of "rackspace", it is required to specify a region,
either using this option or with an environment variable. For other
providers, including a private cloud, specifying a region is optional.
If not specified, Packer will use the environment variable
`OS_REGION_NAME`, if set.
* `ssh_port` (integer) - The port that SSH will be available on. Defaults to port
22.
@ -106,9 +96,6 @@ each category, the available configuration keys are alphabetized.
useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API.
* `tenant_id` (string) - Tenant ID for accessing OpenStack if your
installation requires this.
* `use_floating_ip` (boolean) - Whether or not to use a floating IP for
the instance. Defaults to false.
@ -124,10 +111,8 @@ Ubuntu 12.04 LTS (Precise Pangolin) on Rackspace OpenStack cloud offering.
```javascript
{
"type": "openstack",
"username": "",
"api_key": "",
"openstack_provider": "rackspace",
"provider": "rackspace-us",
"username": "foo",
"password": "foo",
"region": "DFW",
"ssh_username": "root",
"image_name": "Test image",
@ -160,13 +145,3 @@ script is setting environment variables like:
* `OS_TENANT_ID`
* `OS_USERNAME`
* `OS_PASSWORD`
## Troubleshooting
*I get the error "Missing or incorrect provider"*
* Verify your "username", "password" and "provider" settings.
*I get the error "Missing endpoint, or insufficient privileges to access endpoint"*
* Verify your "region" setting.

View File

@ -179,9 +179,11 @@ each category, the available options are alphabetized and described.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
* `shutdown_command` (string) - The command to use to gracefully shut down
the machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
* `shutdown_command` (string) - The command to use to gracefully shut down the machine once all
provisioning is done. By default this is an empty string, which tells Packer to just forcefully
shut down the machine. If your final script already shuts the machine down, this may safely be
omitted. If one or more scripts require a reboot, it is suggested to leave this blank (since
reboots may fail) and to specify the final shutdown command in your last script.
* `shutdown_timeout` (string) - The amount of time to wait after executing
the `shutdown_command` for the virtual machine to actually shut down.
@ -209,6 +211,10 @@ each category, the available options are alphabetized and described.
available. By default this is "20m", or 20 minutes. Note that this should
be quite long since the timer begins as soon as the virtual machine is booted.
* `ssh_skip_nat_mapping` (bool) - Defaults to false. When enabled, Packer does
not set up a forwarded port mapping for SSH and instead uses `ssh_port` on the
host to communicate with the virtual machine.
* `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
execute in order to further customize the virtual machine being created.
The value of this is an array of commands to execute. The commands are executed

View File

@ -154,9 +154,11 @@ each category, the available options are alphabetized and described.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
* `shutdown_command` (string) - The command to use to gracefully shut down
the machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
* `shutdown_command` (string) - The command to use to gracefully shut down the machine once all
provisioning is done. By default this is an empty string, which tells Packer to just forcefully
shut down the machine. If your final script already shuts the machine down, this may safely be
omitted. If one or more scripts require a reboot, it is suggested to leave this blank (since
reboots may fail) and to specify the final shutdown command in your last script.
* `shutdown_timeout` (string) - The amount of time to wait after executing
the `shutdown_command` for the virtual machine to actually shut down.
@ -184,6 +186,10 @@ each category, the available options are alphabetized and described.
available. By default this is "20m", or 20 minutes. Note that this should
be quite long since the timer begins as soon as the virtual machine is booted.
* `ssh_skip_nat_mapping` (bool) - Defaults to false. When enabled, Packer does
not set up a forwarded port mapping for SSH and instead uses `ssh_port` on the
host to communicate with the virtual machine.
* `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
execute in order to further customize the virtual machine being created.
The value of this is an array of commands to execute. The commands are executed

View File

@ -72,6 +72,12 @@ each category, the available options are alphabetized and described.
### Optional:
* `disk_additional_size` (array of integers) - The size(s) of any additional
hard disks for the VM in megabytes. If this is not specified then the VM will
only contain a primary hard disk. The builder uses expandable, not fixed-size
virtual hard disks, so the actual file representing the disk will not use the
full size unless it is full.
* `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special

View File

@ -109,9 +109,11 @@ each category, the available options are alphabetized and described.
By default this is "output-BUILDNAME" where "BUILDNAME" is the name
of the build.
* `shutdown_command` (string) - The command to use to gracefully shut down
the machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
* `shutdown_command` (string) - The command to use to gracefully shut down the machine once all
provisioning is done. By default this is an empty string, which tells Packer to just forcefully
shut down the machine. If your final script already shuts the machine down, this may safely be
omitted. If one or more scripts require a reboot, it is suggested to leave this blank (since
reboots may fail) and to specify the final shutdown command in your last script.
* `shutdown_timeout` (string) - The amount of time to wait after executing
the `shutdown_command` for the virtual machine to actually shut down.

View File

@ -34,6 +34,9 @@ The example below is fully functional and expects cookbooks in the
The reference of available configuration options is listed below. No
configuration is actually required, but at least `run_list` is recommended.
* `chef_environment` (string) - The name of the `chef_environment` sent to the
Chef server. By default this is empty and no environment will be used.
* `config_template` (string) - Path to a template that will be used for
the Chef configuration file. By default Packer only sets configuration
it needs to match the settings set in the provisioner configuration. If

View File

@ -66,8 +66,10 @@ Optional parameters:
* `inline_shebang` (string) - The
[shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use when
running commands specified by `inline`. By default, this is `/bin/sh`.
running commands specified by `inline`. By default, this is `/bin/sh -e`.
If you're not using `inline`, then this configuration has no effect.
**Important:** If you customize this, be sure to include something like
the `-e` flag, otherwise individual steps failing won't fail the provisioner.
* `remote_path` (string) - The path where the script will be uploaded to
in the machine. This defaults to "/tmp/script.sh". This value must be