Merge branch 'master' of https://github.com/mitchellh/packer
CHANGELOG.md

@@ -2,36 +2,63 @@

 BACKWARDS INCOMPATIBILITIES:

-* Packer now ships as a single binary, including plugins. If you install packer 0.9.0 over a previous packer installation, **you must delete all of the packer-* plugin files** or packer will load out-of-date plugins from disk.
+* Packer now ships as a single binary, including plugins. If you install
+  packer 0.9.0 over a previous packer installation, **you must delete all of
+  the packer-* plugin files** or packer will load out-of-date plugins from
+  disk.
 * Release binaries are now provided via <https://releases.hashicorp.com>.
-* Packer 0.9.0 is now built with Go 1.5. Future versions will drop support for building with Go 1.4.
+* Packer 0.9.0 is now built with Go 1.5. Future versions will drop support
+  for building with Go 1.4.

 FEATURES:

-* **Artifice post-processor**: Override packer artifacts during post-
-  processing. This allows you to extract artifacts from a packer builder
-  and use them with other post-processors like compress, docker, and Atlas.
+* **Artifice post-processor**: Override packer artifacts during post-
+  processing. This allows you to extract artifacts from a packer builder and
+  use them with other post-processors like compress, docker, and Atlas.
-* **New `vmware-esxi` feature**: Packer can now export images from vCloud or vSphere during the build. [GH-1921]
+* **New `vmware-esxi` feature**: Packer can now export images from vCloud or
+  vSphere during the build. [GH-1921]

 IMPROVEMENTS:

-* core: Packer plugins are now compiled into the main binary, reducing file size and build times, and making packer easier to install. The overall plugin architecture has not changed and third-party plugins can still be loaded from disk. Please make sure your plugins are up-to-date! [GH-2854]
+* core: Packer plugins are now compiled into the main binary, reducing file
+  size and build times, and making packer easier to install. The overall
+  plugin architecture has not changed and third-party plugins can still be
+  loaded from disk. Please make sure your plugins are up-to-date! [GH-2854]
 * core: Packer now indicates line numbers for template parse errors [GH-2742]
-* core: Scripts are executed via `/usr/bin/env bash` instead of `/bin/bash` for broader compatibility. [GH-2913]
+* core: Scripts are executed via `/usr/bin/env bash` instead of `/bin/bash`
+  for broader compatibility. [GH-2913]
 * core: `target_path` for builder downloads can now be specified. [GH-2600]
+* core: WinRM communicator now supports HTTPS protocol [GH-3061]
 * builder/amazon: Add support for `ebs_optimized` [GH-2806]
-* builder/amazon: You can now specify `0` for `spot_price` to switch to on demand instances [GH-2845]
+* builder/amazon: You can now specify `0` for `spot_price` to switch to on
+  demand instances [GH-2845]
+* builder/amazon: Added `ap-northeast-2` (Seoul) [GH-3056]
+* builder/amazon: packer will try to derive the AZ if only a subnet is
+  specified [GH-3037]
+* builder/digitalocean: doubled instance wait timeouts to power off or
+  shutdown (now 4 minutes) and to complete a snapshot (now 20 minutes)
+  [GH-2939]
-* builder/google: `account_file` can now be provided as a JSON string [GH-2811]
+* builder/google: `account_file` can now be provided as a JSON string
+  [GH-2811]
 * builder/google: added support for `preemptible` instances [GH-2982]
+* builder/google: added support for static external IPs via `address` option
+  [GH-3030]
+* builder/openstack: added retry on WaitForImage 404 [GH-3009]
 * builder/parallels: Improve support for Parallels 11 [GH-2662]
 * builder/parallels: Parallels disks are now compacted by default [GH-2731]
-* builder/parallels: Packer will look for Parallels in `/Applications/Parallels Desktop.app` if it is not detected automatically [GH-2839]
+* builder/parallels: Packer will look for Parallels in
+  `/Applications/Parallels Desktop.app` if it is not detected automatically
+  [GH-2839]
 * builder/docker: Now works with remote hosts, such as boot2docker [GH-2846]
 * builder/qemu: qcow2 images are now compacted by default [GH-2748]
 * builder/qemu: qcow2 images can now be compressed [GH-2748]
 * builder/qemu: Now specifies `virtio-scsi` by default [GH-2422]
 * builder/qemu: Now checks for version-specific options [GH-2376]
 * builder/docker-import: Can now import Artifice artifacts [GH-2718]
+* builder/vmware-esxi: Now supports private key auth for remote builds via
+  `remote_private_key_file` [GH-2912]
+* provisioner/chef: Now supports `encrypted_data_bag_secret_path` option
+  [GH-2653]
 * provisioner/puppet: Now accepts the `extra_arguments` parameter [GH-2635]
 * post-processor/atlas: Added support for compile ID. [GH-2775]

@@ -44,6 +71,8 @@ BUG FIXES:

 * builder/amazon: Use snapshot size when volume size is unspecified [GH-2480]
 * builder/parallels: Now supports interpolation in `prlctl_post` [GH-2828]
 * builder/vmware: `format` option is now read correctly [GH-2892]
+* builder/vmware-esxi: Correct endless loop in destroy validation logic
+  [GH-2911]
 * provisioner/shell: No longer leaves temp scripts behind [GH-1536]
 * provisioner/winrm: Now waits for reboot to complete before continuing with provisioning [GH-2568]
 * post-processor/artifice: Fix truncation of files downloaded from Docker. [GH-2793]

|
@@ -30,6 +30,11 @@ func TestAMIConfigPrepare_regions(t *testing.T) {
 		t.Fatalf("shouldn't have err: %s", err)
 	}

+	c.AMIRegions = listEC2Regions()
+	if err := c.Prepare(nil); err != nil {
+		t.Fatalf("shouldn't have err: %s", err)
+	}
+
 	c.AMIRegions = []string{"foo"}
 	if err := c.Prepare(nil); err == nil {
 		t.Fatal("should have error")

@@ -1,13 +1,26 @@
 package common

+func listEC2Regions() []string {
+	return []string{
+		"ap-northeast-1",
+		"ap-northeast-2",
+		"ap-southeast-1",
+		"ap-southeast-2",
+		"cn-north-1",
+		"eu-central-1",
+		"eu-west-1",
+		"sa-east-1",
+		"us-east-1",
+		"us-gov-west-1",
+		"us-west-1",
+		"us-west-2",
+	}
+}
+
-// IsValidRegion returns true if the supplied region is a valid AWS
+// ValidateRegion returns true if the supplied region is a valid AWS
 // region and false if it's not.
 func ValidateRegion(region string) bool {
-	var regions = [11]string{"us-east-1", "us-west-2", "us-west-1", "eu-west-1",
-		"eu-central-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1",
-		"sa-east-1", "cn-north-1", "us-gov-west-1"}
-
-	for _, valid := range regions {
+	for _, valid := range listEC2Regions() {
 		if region == valid {
 			return true
 		}

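The hunk above replaces a hand-maintained fixed-size array with a single canonical list that both the validator and the tests share, so they can no longer drift apart. A minimal stand-alone sketch of the same pattern (names mirror the diff but are reimplemented here, not imported from Packer):

```go
package main

// listEC2Regions is the single source of truth for known regions.
func listEC2Regions() []string {
	return []string{
		"ap-northeast-1", "ap-northeast-2", "ap-southeast-1", "ap-southeast-2",
		"cn-north-1", "eu-central-1", "eu-west-1", "sa-east-1",
		"us-east-1", "us-gov-west-1", "us-west-1", "us-west-2",
	}
}

// ValidateRegion reports whether region appears in the canonical list.
func ValidateRegion(region string) bool {
	for _, valid := range listEC2Regions() {
		if region == valid {
			return true
		}
	}
	return false
}
```

Because the test in the previous hunk iterates `listEC2Regions()` directly, adding a region (as GH-3056 did for `ap-northeast-2`) is a one-line change that is automatically covered.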
@@ -72,6 +72,17 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 	session := session.New(config)
 	ec2conn := ec2.New(session)

+	// If the subnet is specified but not the AZ, try to determine the AZ automatically
+	if b.config.SubnetId != "" && b.config.AvailabilityZone == "" {
+		log.Printf("[INFO] Finding AZ for the given subnet '%s'", b.config.SubnetId)
+		resp, err := ec2conn.DescribeSubnets(&ec2.DescribeSubnetsInput{SubnetIds: []*string{&b.config.SubnetId}})
+		if err != nil {
+			return nil, err
+		}
+		b.config.AvailabilityZone = *resp.Subnets[0].AvailabilityZone
+		log.Printf("[INFO] AZ found: '%s'", b.config.AvailabilityZone)
+	}
+
 	// Setup the state bag and initial state for the steps
 	state := new(multistep.BasicStateBag)
 	state.Put("config", b.config)

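The control flow added above (GH-3037) is a fill-in-the-blank fallback: only when a subnet is given and no availability zone was set does the builder look the AZ up. A hedged sketch of that logic, with the real `ec2conn.DescribeSubnets` call replaced by an injected lookup function so it can run without an AWS session:

```go
package main

// buildConfig stands in for the builder's config; only the two
// fields the fallback touches are shown.
type buildConfig struct {
	SubnetId         string
	AvailabilityZone string
}

// deriveAZ fills AvailabilityZone from the subnet only when the user
// supplied a subnet but left the AZ empty. An explicit AZ always wins.
func deriveAZ(c *buildConfig, lookup func(subnetID string) (string, error)) error {
	if c.SubnetId == "" || c.AvailabilityZone != "" {
		return nil // nothing to derive, or the user already chose an AZ
	}
	az, err := lookup(c.SubnetId)
	if err != nil {
		return err
	}
	c.AvailabilityZone = az
	return nil
}
```

In the real builders the lookup is `DescribeSubnets` and the AZ is read from the first returned subnet, exactly as the hunk shows.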
@@ -163,6 +163,17 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 	session := session.New(config)
 	ec2conn := ec2.New(session)

+	// If the subnet is specified but not the AZ, try to determine the AZ automatically
+	if b.config.SubnetId != "" && b.config.AvailabilityZone == "" {
+		log.Printf("[INFO] Finding AZ for the given subnet '%s'", b.config.SubnetId)
+		resp, err := ec2conn.DescribeSubnets(&ec2.DescribeSubnetsInput{SubnetIds: []*string{&b.config.SubnetId}})
+		if err != nil {
+			return nil, err
+		}
+		b.config.AvailabilityZone = *resp.Subnets[0].AvailabilityZone
+		log.Printf("[INFO] AZ found: '%s'", b.config.AvailabilityZone)
+	}
+
 	// Setup the state bag and initial state for the steps
 	state := new(multistep.BasicStateBag)
 	state.Put("config", &b.config)

@@ -50,7 +50,7 @@ func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
 	}

 	// Wait for the droplet to become unlocked for future steps
-	if err := waitForDropletUnlocked(client, dropletId, 2*time.Minute); err != nil {
+	if err := waitForDropletUnlocked(client, dropletId, 4*time.Minute); err != nil {
 		// If we get an error the first time, actually report it
 		err := fmt.Errorf("Error powering off droplet: %s", err)
 		state.Put("error", err)

@@ -72,7 +72,7 @@ func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
 		return multistep.ActionHalt
 	}

-	if err := waitForDropletUnlocked(client, dropletId, 2*time.Minute); err != nil {
+	if err := waitForDropletUnlocked(client, dropletId, 4*time.Minute); err != nil {
 		// If we get an error the first time, actually report it
 		err := fmt.Errorf("Error shutting down droplet: %s", err)
 		state.Put("error", err)

@@ -30,8 +30,8 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction {

 	// Wait for the droplet to become unlocked first. For snapshots
 	// this can end up taking quite a long time, so we hardcode this to
-	// 10 minutes.
-	if err := waitForDropletUnlocked(client, dropletId, 10*time.Minute); err != nil {
+	// 20 minutes.
+	if err := waitForDropletUnlocked(client, dropletId, 20*time.Minute); err != nil {
 		// If we get an error the first time, actually report it
 		err := fmt.Errorf("Error shutting down droplet: %s", err)
 		state.Put("error", err)

@@ -31,6 +31,7 @@ type Config struct {
 	MachineType          string            `mapstructure:"machine_type"`
 	Metadata             map[string]string `mapstructure:"metadata"`
 	Network              string            `mapstructure:"network"`
+	Address              string            `mapstructure:"address"`
 	Preemptible          bool              `mapstructure:"preemptible"`
 	SourceImage          string            `mapstructure:"source_image"`
 	SourceImageProjectId string            `mapstructure:"source_image_project_id"`

@@ -47,6 +47,7 @@ type InstanceConfig struct {
 	Metadata    map[string]string
 	Name        string
 	Network     string
+	Address     string
 	Preemptible bool
 	Tags        []string
 	Zone        string

@@ -13,6 +13,7 @@ import (
 	"golang.org/x/oauth2/google"
 	"golang.org/x/oauth2/jwt"
 	"google.golang.org/api/compute/v1"
+	"strings"
 )

 // driverGCE is a Driver implementation that actually talks to GCE.

@@ -214,6 +215,23 @@ func (d *driverGCE) RunInstance(c *InstanceConfig) (<-chan error, error) {
 		return nil, err
 	}

+	// If given a regional ip, get it
+	accessconfig := compute.AccessConfig{
+		Name: "AccessConfig created by Packer",
+		Type: "ONE_TO_ONE_NAT",
+	}
+
+	if c.Address != "" {
+		d.ui.Message(fmt.Sprintf("Looking up address: %s", c.Address))
+		region_url := strings.Split(zone.Region, "/")
+		region := region_url[len(region_url)-1]
+		address, err := d.service.Addresses.Get(d.projectId, region, c.Address).Do()
+		if err != nil {
+			return nil, err
+		}
+		accessconfig.NatIP = address.Address
+	}
+
 	// Build up the metadata
 	metadata := make([]*compute.MetadataItems, len(c.Metadata))
 	for k, v := range c.Metadata {

@@ -247,10 +265,7 @@ func (d *driverGCE) RunInstance(c *InstanceConfig) (<-chan error, error) {
 		NetworkInterfaces: []*compute.NetworkInterface{
 			&compute.NetworkInterface{
 				AccessConfigs: []*compute.AccessConfig{
-					&compute.AccessConfig{
-						Name: "AccessConfig created by Packer",
-						Type: "ONE_TO_ONE_NAT",
-					},
+					&accessconfig,
 				},
 				Network: network.SelfLink,
 			},

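One detail worth noting in the hunk above: `zone.Region` from the GCE API is a full resource URL, not a bare region name, so the driver takes the last path segment before calling `Addresses.Get`. That one-liner in isolation:

```go
package main

import "strings"

// regionFromURL extracts the region name from a GCE region self-link
// such as ".../projects/p/regions/us-central1". A bare name (no slashes)
// passes through unchanged, since the last segment is the whole string.
func regionFromURL(regionURL string) string {
	parts := strings.Split(regionURL, "/")
	return parts[len(parts)-1]
}
```

The example URL shape is illustrative; the property the diff relies on is only that the region name is the final `/`-separated segment.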
@@ -59,6 +59,7 @@ func (s *StepCreateInstance) Run(state multistep.StateBag) multistep.StepAction
 		Metadata:    config.getInstanceMetadata(sshPublicKey),
 		Name:        name,
 		Network:     config.Network,
+		Address:     config.Address,
 		Preemptible: config.Preemptible,
 		Tags:        config.Tags,
 		Zone:        config.Zone,

@@ -62,12 +62,20 @@ func (s *stepCreateImage) Cleanup(multistep.StateBag) {

 // WaitForImage waits for the given Image ID to become ready.
 func WaitForImage(client *gophercloud.ServiceClient, imageId string) error {
+	maxNumErrors := 10
+	numErrors := 0
+
 	for {
 		image, err := images.Get(client, imageId).Extract()
 		if err != nil {
 			errCode, ok := err.(*gophercloud.UnexpectedResponseCodeError)
-			if ok && errCode.Actual == 500 {
-				log.Printf("[ERROR] 500 error received, will ignore and retry: %s", err)
+			if ok && (errCode.Actual == 500 || errCode.Actual == 404) {
+				numErrors++
+				if numErrors >= maxNumErrors {
+					log.Printf("[ERROR] Maximum number of errors (%d) reached; failing with: %s", numErrors, err)
+					return err
+				}
+				log.Printf("[ERROR] %d error received, will ignore and retry: %s", errCode.Actual, err)
 				time.Sleep(2 * time.Second)
 				continue
 			}

@@ -23,6 +23,7 @@ type Builder struct {

 type Config struct {
 	common.PackerConfig          `mapstructure:",squash"`
+	common.HTTPConfig            `mapstructure:",squash"`
 	common.ISOConfig             `mapstructure:",squash"`
 	parallelscommon.FloppyConfig `mapstructure:",squash"`
 	parallelscommon.OutputConfig `mapstructure:",squash"`

@@ -39,9 +40,6 @@ type Config struct {
 	GuestOSType        string   `mapstructure:"guest_os_type"`
 	HardDriveInterface string   `mapstructure:"hard_drive_interface"`
 	HostInterfaces     []string `mapstructure:"host_interfaces"`
-	HTTPDir            string   `mapstructure:"http_directory"`
-	HTTPPortMin        uint     `mapstructure:"http_port_min"`
-	HTTPPortMax        uint     `mapstructure:"http_port_max"`
 	SkipCompaction     bool     `mapstructure:"skip_compaction"`
 	VMName             string   `mapstructure:"vm_name"`

@@ -77,6 +75,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 	warnings = append(warnings, isoWarnings...)
 	errs = packer.MultiErrorAppend(errs, isoErrs...)

+	errs = packer.MultiErrorAppend(errs, b.config.HTTPConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(
 		errs, b.config.OutputConfig.Prepare(&b.config.ctx, &b.config.PackerConfig)...)

@@ -110,14 +109,6 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 			"Run it to see all available values: `prlctl create x -d list` ")
 	}

-	if b.config.HTTPPortMin == 0 {
-		b.config.HTTPPortMin = 8000
-	}
-
-	if b.config.HTTPPortMax == 0 {
-		b.config.HTTPPortMax = 9000
-	}
-
 	if len(b.config.HostInterfaces) == 0 {
 		b.config.HostInterfaces = []string{"en0", "en1", "en2", "en3", "en4", "en5", "en6", "en7",
 			"en8", "en9", "ppp0", "ppp1", "ppp2"}

@@ -132,11 +123,6 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 			errs, errors.New("hard_drive_interface can only be ide, sata, or scsi"))
 	}

-	if b.config.HTTPPortMin > b.config.HTTPPortMax {
-		errs = packer.MultiErrorAppend(
-			errs, errors.New("http_port_min must be less than http_port_max"))
-	}
-
 	// Warnings
 	if b.config.ShutdownCommand == "" {
 		warnings = append(warnings,

@@ -185,7 +171,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		&common.StepCreateFloppy{
 			Files: b.config.FloppyFiles,
 		},
-		new(stepHTTPServer),
+		&common.StepHTTPServer{
+			HTTPDir:     b.config.HTTPDir,
+			HTTPPortMin: b.config.HTTPPortMin,
+			HTTPPortMax: b.config.HTTPPortMax,
+		},
 		new(stepCreateVM),
 		new(stepCreateDisk),
 		new(stepSetBootOrder),

@@ -138,45 +138,6 @@ func TestBuilderPrepare_HardDriveInterface(t *testing.T) {
 	}
 }

-func TestBuilderPrepare_HTTPPort(t *testing.T) {
-	var b Builder
-	config := testConfig()
-
-	// Bad
-	config["http_port_min"] = 1000
-	config["http_port_max"] = 500
-	warns, err := b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Bad
-	config["http_port_min"] = -500
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Good
-	config["http_port_min"] = 500
-	config["http_port_max"] = 1000
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err != nil {
-		t.Fatalf("should not have error: %s", err)
-	}
-}
-
 func TestBuilderPrepare_InvalidKey(t *testing.T) {
 	var b Builder
 	config := testConfig()

@ -1,76 +0,0 @@
|
||||||
package iso
|
|
||||||
|
|
||||||
import (
|
|
||||||
"fmt"
|
|
||||||
"github.com/mitchellh/multistep"
|
|
||||||
"github.com/mitchellh/packer/packer"
|
|
||||||
"log"
|
|
||||||
"math/rand"
|
|
||||||
"net"
|
|
||||||
"net/http"
|
|
||||||
)
|
|
||||||
|
|
||||||
// This step creates and runs the HTTP server that is serving files from the
|
|
||||||
// directory specified by the 'http_directory` configuration parameter in the
|
|
||||||
// template.
|
|
||||||
//
|
|
||||||
// Uses:
|
|
||||||
// config *config
|
|
||||||
// ui packer.Ui
|
|
||||||
//
|
|
||||||
// Produces:
|
|
||||||
// http_port int - The port the HTTP server started on.
|
|
||||||
type stepHTTPServer struct {
|
|
||||||
l net.Listener
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *stepHTTPServer) Run(state multistep.StateBag) multistep.StepAction {
|
|
||||||
config := state.Get("config").(*Config)
|
|
||||||
ui := state.Get("ui").(packer.Ui)
|
|
||||||
|
|
||||||
var httpPort uint = 0
|
|
||||||
if config.HTTPDir == "" {
|
|
||||||
state.Put("http_port", httpPort)
|
|
||||||
return multistep.ActionContinue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Find an available TCP port for our HTTP server
|
|
||||||
var httpAddr string
|
|
||||||
portRange := int(config.HTTPPortMax - config.HTTPPortMin)
|
|
||||||
for {
|
|
||||||
var err error
|
|
||||||
var offset uint = 0
|
|
||||||
|
|
||||||
if portRange > 0 {
|
|
||||||
// Intn will panic if portRange == 0, so we do a check.
|
|
||||||
offset = uint(rand.Intn(portRange))
|
|
||||||
}
|
|
||||||
|
|
||||||
httpPort = offset + config.HTTPPortMin
|
|
||||||
httpAddr = fmt.Sprintf(":%d", httpPort)
|
|
||||||
log.Printf("Trying port: %d", httpPort)
|
|
||||||
s.l, err = net.Listen("tcp", httpAddr)
|
|
||||||
if err == nil {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
ui.Say(fmt.Sprintf("Starting HTTP server on port %d", httpPort))
|
|
||||||
|
|
||||||
// Start the HTTP server and run it in the background
|
|
||||||
fileServer := http.FileServer(http.Dir(config.HTTPDir))
|
|
||||||
server := &http.Server{Addr: httpAddr, Handler: fileServer}
|
|
||||||
go server.Serve(s.l)
|
|
||||||
|
|
||||||
// Save the address into the state so it can be accessed in the future
|
|
||||||
state.Put("http_port", httpPort)
|
|
||||||
|
|
||||||
return multistep.ActionContinue
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *stepHTTPServer) Cleanup(multistep.StateBag) {
|
|
||||||
if s.l != nil {
|
|
||||||
// Close the listener so that the HTTP server stops
|
|
||||||
s.l.Close()
|
|
||||||
}
|
|
||||||
}
|
|
|
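The per-builder `stepHTTPServer` files deleted here (and below, for qemu) each implemented the same idea now centralized in `common.StepHTTPServer`: pick a random port in `[min, max)`, try to listen, and serve a directory over HTTP in the background. A condensed stand-alone sketch of the core (with one deliberate tweak: a bounded number of attempts instead of the original's infinite loop):

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"net/http"
)

// serveDir listens on a random free port in [portMin, portMax) and serves
// dir over HTTP in the background. The caller closes the listener to stop
// the server, matching the original step's Cleanup.
func serveDir(dir string, portMin, portMax uint) (net.Listener, uint, error) {
	portRange := int(portMax - portMin)
	for attempts := 0; attempts < 100; attempts++ {
		var offset uint
		if portRange > 0 {
			// rand.Intn panics if its argument is 0, so guard it —
			// the same check the original code carried.
			offset = uint(rand.Intn(portRange))
		}
		port := portMin + offset
		l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
		if err != nil {
			continue // port in use; try another
		}
		go http.Serve(l, http.FileServer(http.Dir(dir)))
		return l, port, nil
	}
	return nil, 0, fmt.Errorf("no free port in [%d, %d)", portMin, portMax)
}
```

Extracting this into one shared step is why the per-builder `HTTPDir`/`HTTPPortMin`/`HTTPPortMax` fields, their defaulting, their validation, and their duplicated tests could all be deleted in the hunks around this one.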
@@ -79,6 +79,7 @@ type Builder struct {

 type Config struct {
 	common.PackerConfig `mapstructure:",squash"`
+	common.HTTPConfig   `mapstructure:",squash"`
 	common.ISOConfig    `mapstructure:",squash"`
 	Comm communicator.Config `mapstructure:",squash"`

@@ -94,9 +95,6 @@ type Config struct {
 	Format      string `mapstructure:"format"`
 	Headless    bool   `mapstructure:"headless"`
 	DiskImage   bool   `mapstructure:"disk_image"`
-	HTTPDir     string `mapstructure:"http_directory"`
-	HTTPPortMin uint   `mapstructure:"http_port_min"`
-	HTTPPortMax uint   `mapstructure:"http_port_max"`
 	MachineType string `mapstructure:"machine_type"`
 	NetDevice   string `mapstructure:"net_device"`
 	OutputDir   string `mapstructure:"output_directory"`

@@ -160,14 +158,6 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 		}
 	}

-	if b.config.HTTPPortMin == 0 {
-		b.config.HTTPPortMin = 8000
-	}
-
-	if b.config.HTTPPortMax == 0 {
-		b.config.HTTPPortMax = 9000
-	}
-
 	if b.config.MachineType == "" {
 		b.config.MachineType = "pc"
 	}

@@ -235,6 +225,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 	warnings = append(warnings, isoWarnings...)
 	errs = packer.MultiErrorAppend(errs, isoErrs...)

+	errs = packer.MultiErrorAppend(errs, b.config.HTTPConfig.Prepare(&b.config.ctx)...)
 	if es := b.config.Comm.Prepare(&b.config.ctx); len(es) > 0 {
 		errs = packer.MultiErrorAppend(errs, es...)
 	}

@@ -274,11 +265,6 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 			errs, errors.New("unrecognized disk cache type"))
 	}

-	if b.config.HTTPPortMin > b.config.HTTPPortMax {
-		errs = packer.MultiErrorAppend(
-			errs, errors.New("http_port_min must be less than http_port_max"))
-	}
-
 	if !b.config.PackerForce {
 		if _, err := os.Stat(b.config.OutputDir); err == nil {
 			errs = packer.MultiErrorAppend(

@@ -357,7 +343,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		new(stepCreateDisk),
 		new(stepCopyDisk),
 		new(stepResizeDisk),
-		new(stepHTTPServer),
+		&common.StepHTTPServer{
+			HTTPDir:     b.config.HTTPDir,
+			HTTPPortMin: b.config.HTTPPortMin,
+			HTTPPortMax: b.config.HTTPPortMax,
+		},
 		new(stepForwardSSH),
 		new(stepConfigureVNC),
 		steprun,

@@ -206,45 +206,6 @@ func TestBuilderPrepare_DiskSize(t *testing.T) {
 	}
 }

-func TestBuilderPrepare_HTTPPort(t *testing.T) {
-	var b Builder
-	config := testConfig()
-
-	// Bad
-	config["http_port_min"] = 1000
-	config["http_port_max"] = 500
-	warns, err := b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Bad
-	config["http_port_min"] = -500
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Good
-	config["http_port_min"] = 500
-	config["http_port_max"] = 1000
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err != nil {
-		t.Fatalf("should not have error: %s", err)
-	}
-}
-
 func TestBuilderPrepare_Format(t *testing.T) {
 	var b Builder
 	config := testConfig()

@@ -1,76 +0,0 @@
-package qemu
-
-import (
-	"fmt"
-	"github.com/mitchellh/multistep"
-	"github.com/mitchellh/packer/packer"
-	"log"
-	"math/rand"
-	"net"
-	"net/http"
-)
-
-// This step creates and runs the HTTP server that is serving files from the
-// directory specified by the 'http_directory` configuration parameter in the
-// template.
-//
-// Uses:
-//   config *config
-//   ui     packer.Ui
-//
-// Produces:
-//   http_port int - The port the HTTP server started on.
-type stepHTTPServer struct {
-	l net.Listener
-}
-
-func (s *stepHTTPServer) Run(state multistep.StateBag) multistep.StepAction {
-	config := state.Get("config").(*Config)
-	ui := state.Get("ui").(packer.Ui)
-
-	var httpPort uint = 0
-	if config.HTTPDir == "" {
-		state.Put("http_port", httpPort)
-		return multistep.ActionContinue
-	}
-
-	// Find an available TCP port for our HTTP server
-	var httpAddr string
-	portRange := int(config.HTTPPortMax - config.HTTPPortMin)
-	for {
-		var err error
-		var offset uint = 0
-
-		if portRange > 0 {
-			// Intn will panic if portRange == 0, so we do a check.
-			offset = uint(rand.Intn(portRange))
-		}
-
-		httpPort = offset + config.HTTPPortMin
-		httpAddr = fmt.Sprintf(":%d", httpPort)
-		log.Printf("Trying port: %d", httpPort)
-		s.l, err = net.Listen("tcp", httpAddr)
-		if err == nil {
-			break
-		}
-	}
-
-	ui.Say(fmt.Sprintf("Starting HTTP server on port %d", httpPort))
-
-	// Start the HTTP server and run it in the background
-	fileServer := http.FileServer(http.Dir(config.HTTPDir))
-	server := &http.Server{Addr: httpAddr, Handler: fileServer}
-	go server.Serve(s.l)
-
-	// Save the address into the state so it can be accessed in the future
-	state.Put("http_port", httpPort)
-
-	return multistep.ActionContinue
-}
-
-func (s *stepHTTPServer) Cleanup(multistep.StateBag) {
-	if s.l != nil {
-		// Close the listener so that the HTTP server stops
-		s.l.Close()
-	}
-}
@@ -1,7 +1,6 @@
 package common

 import (
-	"errors"
 	"fmt"
 	"time"

@@ -12,10 +11,6 @@ type RunConfig struct {
 	Headless    bool   `mapstructure:"headless"`
 	RawBootWait string `mapstructure:"boot_wait"`

-	HTTPDir     string `mapstructure:"http_directory"`
-	HTTPPortMin uint   `mapstructure:"http_port_min"`
-	HTTPPortMax uint   `mapstructure:"http_port_max"`
-
 	BootWait time.Duration ``
 }

@@ -24,14 +19,6 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
 		c.RawBootWait = "10s"
 	}

-	if c.HTTPPortMin == 0 {
-		c.HTTPPortMin = 8000
-	}
-
-	if c.HTTPPortMax == 0 {
-		c.HTTPPortMax = 9000
-	}
-
 	var errs []error
 	var err error
 	c.BootWait, err = time.ParseDuration(c.RawBootWait)
@@ -39,10 +26,5 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
 		errs = append(errs, fmt.Errorf("Failed parsing boot_wait: %s", err))
 	}

-	if c.HTTPPortMin > c.HTTPPortMax {
-		errs = append(errs,
-			errors.New("http_port_min must be less than http_port_max"))
-	}
-
 	return errs
 }
@@ -24,6 +24,7 @@ type Builder struct {

 type Config struct {
 	common.PackerConfig     `mapstructure:",squash"`
+	common.HTTPConfig       `mapstructure:",squash"`
 	common.ISOConfig        `mapstructure:",squash"`
 	vboxcommon.ExportConfig `mapstructure:",squash"`
 	vboxcommon.ExportOpts   `mapstructure:",squash"`
@@ -81,6 +82,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 	errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(
 		errs, b.config.OutputConfig.Prepare(&b.config.ctx, &b.config.PackerConfig)...)
+	errs = packer.MultiErrorAppend(errs, b.config.HTTPConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(errs, b.config.ShutdownConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(errs, b.config.SSHConfig.Prepare(&b.config.ctx)...)
@@ -194,7 +196,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		&common.StepCreateFloppy{
 			Files: b.config.FloppyFiles,
 		},
-		&vboxcommon.StepHTTPServer{
+		&common.StepHTTPServer{
 			HTTPDir:     b.config.HTTPDir,
 			HTTPPortMin: b.config.HTTPPortMin,
 			HTTPPortMax: b.config.HTTPPortMax,
@@ -260,45 +260,6 @@ func TestBuilderPrepare_HardDriveInterface(t *testing.T) {
 	}
 }

-func TestBuilderPrepare_HTTPPort(t *testing.T) {
-	var b Builder
-	config := testConfig()
-
-	// Bad
-	config["http_port_min"] = 1000
-	config["http_port_max"] = 500
-	warns, err := b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Bad
-	config["http_port_min"] = -500
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Good
-	config["http_port_min"] = 500
-	config["http_port_max"] = 1000
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err != nil {
-		t.Fatalf("should not have error: %s", err)
-	}
-}
-
 func TestBuilderPrepare_InvalidKey(t *testing.T) {
 	var b Builder
 	config := testConfig()
@@ -1,76 +0,0 @@
-package iso
-
-import (
-	"fmt"
-	"github.com/mitchellh/multistep"
-	"github.com/mitchellh/packer/packer"
-	"log"
-	"math/rand"
-	"net"
-	"net/http"
-)
-
-// This step creates and runs the HTTP server that is serving files from the
-// directory specified by the 'http_directory` configuration parameter in the
-// template.
-//
-// Uses:
-//   config *config
-//   ui     packer.Ui
-//
-// Produces:
-//   http_port int - The port the HTTP server started on.
-type stepHTTPServer struct {
-	l net.Listener
-}
-
-func (s *stepHTTPServer) Run(state multistep.StateBag) multistep.StepAction {
-	config := state.Get("config").(*Config)
-	ui := state.Get("ui").(packer.Ui)
-
-	var httpPort uint = 0
-	if config.HTTPDir == "" {
-		state.Put("http_port", httpPort)
-		return multistep.ActionContinue
-	}
-
-	// Find an available TCP port for our HTTP server
-	var httpAddr string
-	portRange := int(config.HTTPPortMax - config.HTTPPortMin)
-	for {
-		var err error
-		var offset uint = 0
-
-		if portRange > 0 {
-			// Intn will panic if portRange == 0, so we do a check.
-			offset = uint(rand.Intn(portRange))
-		}
-
-		httpPort = offset + config.HTTPPortMin
-		httpAddr = fmt.Sprintf(":%d", httpPort)
-		log.Printf("Trying port: %d", httpPort)
-		s.l, err = net.Listen("tcp", httpAddr)
-		if err == nil {
-			break
-		}
-	}
-
-	ui.Say(fmt.Sprintf("Starting HTTP server on port %d", httpPort))
-
-	// Start the HTTP server and run it in the background
-	fileServer := http.FileServer(http.Dir(config.HTTPDir))
-	server := &http.Server{Addr: httpAddr, Handler: fileServer}
-	go server.Serve(s.l)
-
-	// Save the address into the state so it can be accessed in the future
-	state.Put("http_port", httpPort)
-
-	return multistep.ActionContinue
-}
-
-func (s *stepHTTPServer) Cleanup(multistep.StateBag) {
-	if s.l != nil {
-		// Close the listener so that the HTTP server stops
-		s.l.Close()
-	}
-}
@@ -57,7 +57,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		&common.StepCreateFloppy{
 			Files: b.config.FloppyFiles,
 		},
-		&vboxcommon.StepHTTPServer{
+		&common.StepHTTPServer{
 			HTTPDir:     b.config.HTTPDir,
 			HTTPPortMin: b.config.HTTPPortMin,
 			HTTPPortMax: b.config.HTTPPortMax,
@@ -15,6 +15,7 @@ import (
 // Config is the configuration structure for the builder.
 type Config struct {
 	common.PackerConfig     `mapstructure:",squash"`
+	common.HTTPConfig       `mapstructure:",squash"`
 	vboxcommon.ExportConfig `mapstructure:",squash"`
 	vboxcommon.ExportOpts   `mapstructure:",squash"`
 	vboxcommon.FloppyConfig `mapstructure:",squash"`
@@ -77,6 +78,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
 	errs = packer.MultiErrorAppend(errs, c.ExportConfig.Prepare(&c.ctx)...)
 	errs = packer.MultiErrorAppend(errs, c.ExportOpts.Prepare(&c.ctx)...)
 	errs = packer.MultiErrorAppend(errs, c.FloppyConfig.Prepare(&c.ctx)...)
+	errs = packer.MultiErrorAppend(errs, c.HTTPConfig.Prepare(&c.ctx)...)
 	errs = packer.MultiErrorAppend(errs, c.OutputConfig.Prepare(&c.ctx, &c.PackerConfig)...)
 	errs = packer.MultiErrorAppend(errs, c.RunConfig.Prepare(&c.ctx)...)
 	errs = packer.MultiErrorAppend(errs, c.ShutdownConfig.Prepare(&c.ctx)...)
@@ -1,7 +1,6 @@
 package common

 import (
-	"errors"
 	"fmt"
 	"time"

@@ -12,10 +11,6 @@ type RunConfig struct {
 	Headless    bool   `mapstructure:"headless"`
 	RawBootWait string `mapstructure:"boot_wait"`

-	HTTPDir     string `mapstructure:"http_directory"`
-	HTTPPortMin uint   `mapstructure:"http_port_min"`
-	HTTPPortMax uint   `mapstructure:"http_port_max"`
-
 	VNCPortMin uint `mapstructure:"vnc_port_min"`
 	VNCPortMax uint `mapstructure:"vnc_port_max"`

@@ -27,14 +22,6 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
 		c.RawBootWait = "10s"
 	}

-	if c.HTTPPortMin == 0 {
-		c.HTTPPortMin = 8000
-	}
-
-	if c.HTTPPortMax == 0 {
-		c.HTTPPortMax = 9000
-	}
-
 	if c.VNCPortMin == 0 {
 		c.VNCPortMin = 5900
 	}
@@ -53,11 +40,6 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
 		}
 	}

-	if c.HTTPPortMin > c.HTTPPortMax {
-		errs = append(errs,
-			errors.New("http_port_min must be less than http_port_max"))
-	}
-
 	if c.VNCPortMin > c.VNCPortMax {
 		errs = append(
 			errs, fmt.Errorf("vnc_port_min must be less than vnc_port_max"))
@@ -1,78 +0,0 @@
-package common
-
-import (
-	"fmt"
-	"github.com/mitchellh/multistep"
-	"github.com/mitchellh/packer/packer"
-	"log"
-	"math/rand"
-	"net"
-	"net/http"
-)
-
-// This step creates and runs the HTTP server that is serving files from the
-// directory specified by the 'http_directory` configuration parameter in the
-// template.
-//
-// Uses:
-//   ui packer.Ui
-//
-// Produces:
-//   http_port int - The port the HTTP server started on.
-type StepHTTPServer struct {
-	HTTPDir     string
-	HTTPPortMin uint
-	HTTPPortMax uint
-
-	l net.Listener
-}
-
-func (s *StepHTTPServer) Run(state multistep.StateBag) multistep.StepAction {
-	ui := state.Get("ui").(packer.Ui)
-
-	var httpPort uint = 0
-	if s.HTTPDir == "" {
-		state.Put("http_port", httpPort)
-		return multistep.ActionContinue
-	}
-
-	// Find an available TCP port for our HTTP server
-	var httpAddr string
-	portRange := int(s.HTTPPortMax - s.HTTPPortMin)
-	for {
-		var err error
-		var offset uint = 0
-
-		if portRange > 0 {
-			// Intn will panic if portRange == 0, so we do a check.
-			offset = uint(rand.Intn(portRange))
-		}
-
-		httpPort = offset + s.HTTPPortMin
-		httpAddr = fmt.Sprintf("0.0.0.0:%d", httpPort)
-		log.Printf("Trying port: %d", httpPort)
-		s.l, err = net.Listen("tcp", httpAddr)
-		if err == nil {
-			break
-		}
-	}
-
-	ui.Say(fmt.Sprintf("Starting HTTP server on port %d", httpPort))
-
-	// Start the HTTP server and run it in the background
-	fileServer := http.FileServer(http.Dir(s.HTTPDir))
-	server := &http.Server{Addr: httpAddr, Handler: fileServer}
-	go server.Serve(s.l)
-
-	// Save the address into the state so it can be accessed in the future
-	state.Put("http_port", httpPort)
-
-	return multistep.ActionContinue
-}
-
-func (s *StepHTTPServer) Cleanup(multistep.StateBag) {
-	if s.l != nil {
-		// Close the listener so that the HTTP server stops
-		s.l.Close()
-	}
-}
@@ -26,6 +26,7 @@ type Builder struct {

 type Config struct {
 	common.PackerConfig    `mapstructure:",squash"`
+	common.HTTPConfig      `mapstructure:",squash"`
 	common.ISOConfig       `mapstructure:",squash"`
 	vmwcommon.DriverConfig `mapstructure:",squash"`
 	vmwcommon.OutputConfig `mapstructure:",squash"`
@@ -57,6 +58,7 @@ type Config struct {
 	RemotePort       uint   `mapstructure:"remote_port"`
 	RemoteUser       string `mapstructure:"remote_username"`
 	RemotePassword   string `mapstructure:"remote_password"`
+	RemotePrivateKey string `mapstructure:"remote_private_key_file"`

 	ctx interpolate.Context
 }
@@ -83,6 +85,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 	isoWarnings, isoErrs := b.config.ISOConfig.Prepare(&b.config.ctx)
 	warnings = append(warnings, isoWarnings...)
 	errs = packer.MultiErrorAppend(errs, isoErrs...)
+	errs = packer.MultiErrorAppend(errs, b.config.HTTPConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(errs, b.config.DriverConfig.Prepare(&b.config.ctx)...)
 	errs = packer.MultiErrorAppend(errs,
 		b.config.OutputConfig.Prepare(&b.config.ctx, &b.config.PackerConfig)...)
@@ -244,7 +247,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 			CustomData: b.config.VMXData,
 		},
 		&vmwcommon.StepSuppressMessages{},
-		&vmwcommon.StepHTTPServer{
+		&common.StepHTTPServer{
 			HTTPDir:     b.config.HTTPDir,
 			HTTPPortMin: b.config.HTTPPortMin,
 			HTTPPortMax: b.config.HTTPPortMax,
@@ -152,45 +152,6 @@ func TestBuilderPrepare_Format(t *testing.T) {
 	}
 }

-func TestBuilderPrepare_HTTPPort(t *testing.T) {
-	var b Builder
-	config := testConfig()
-
-	// Bad
-	config["http_port_min"] = 1000
-	config["http_port_max"] = 500
-	warns, err := b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Bad
-	config["http_port_min"] = -500
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err == nil {
-		t.Fatal("should have error")
-	}
-
-	// Good
-	config["http_port_min"] = 500
-	config["http_port_max"] = 1000
-	b = Builder{}
-	warns, err = b.Prepare(config)
-	if len(warns) > 0 {
-		t.Fatalf("bad: %#v", warns)
-	}
-	if err != nil {
-		t.Fatalf("should not have error: %s", err)
-	}
-}
-
 func TestBuilderPrepare_InvalidKey(t *testing.T) {
 	var b Builder
 	config := testConfig()
@@ -21,6 +21,7 @@ func NewDriver(config *Config) (vmwcommon.Driver, error) {
 			Port:           config.RemotePort,
 			Username:       config.RemoteUser,
 			Password:       config.RemotePassword,
+			PrivateKey:     config.RemotePrivateKey,
 			Datastore:      config.RemoteDatastore,
 			CacheDatastore: config.RemoteCacheDatastore,
 			CacheDirectory: config.RemoteCacheDirectory,
@@ -15,6 +15,7 @@ import (
 	"time"

 	"github.com/mitchellh/multistep"
+	commonssh "github.com/mitchellh/packer/common/ssh"
 	"github.com/mitchellh/packer/communicator/ssh"
 	"github.com/mitchellh/packer/packer"
 	gossh "golang.org/x/crypto/ssh"
@@ -27,6 +28,7 @@ type ESX5Driver struct {
 	Port           uint
 	Username       string
 	Password       string
+	PrivateKey     string
 	Datastore      string
 	CacheDatastore string
 	CacheDirectory string
@@ -340,7 +342,15 @@ func (d *ESX5Driver) connect() error {
 			ssh.PasswordKeyboardInteractive(d.Password)),
 	}

-	// TODO(dougm) KeyPath support
+	if d.PrivateKey != "" {
+		signer, err := commonssh.FileSigner(d.PrivateKey)
+		if err != nil {
+			return err
+		}
+
+		auth = append(auth, gossh.PublicKeys(signer))
+	}
+
 	sshConfig := &ssh.Config{
 		Connection: ssh.ConnectFunc("tcp", address),
 		SSHConfig: &gossh.ClientConfig{
@@ -57,8 +57,8 @@ func (s *StepRegister) Cleanup(state multistep.StateBag) {
 	}
 	// Wait for the machine to actually destroy
 	for {
-		exists, _ := remoteDriver.IsDestroyed()
-		if !exists {
+		destroyed, _ := remoteDriver.IsDestroyed()
+		if destroyed {
 			break
 		}
 		time.Sleep(150 * time.Millisecond)
@@ -72,7 +72,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 			CustomData: b.config.VMXData,
 		},
 		&vmwcommon.StepSuppressMessages{},
-		&vmwcommon.StepHTTPServer{
+		&common.StepHTTPServer{
 			HTTPDir:     b.config.HTTPDir,
 			HTTPPortMin: b.config.HTTPPortMin,
 			HTTPPortMax: b.config.HTTPPortMax,
@@ -14,6 +14,7 @@ import (
 // Config is the configuration structure for the builder.
 type Config struct {
 	common.PackerConfig    `mapstructure:",squash"`
+	common.HTTPConfig      `mapstructure:",squash"`
 	vmwcommon.DriverConfig `mapstructure:",squash"`
 	vmwcommon.OutputConfig `mapstructure:",squash"`
 	vmwcommon.RunConfig    `mapstructure:",squash"`
@@ -56,6 +57,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
 	// Prepare the errors
 	var errs *packer.MultiError
 	errs = packer.MultiErrorAppend(errs, c.DriverConfig.Prepare(&c.ctx)...)
+	errs = packer.MultiErrorAppend(errs, c.HTTPConfig.Prepare(&c.ctx)...)
 	errs = packer.MultiErrorAppend(errs, c.OutputConfig.Prepare(&c.ctx, &c.PackerConfig)...)
 	errs = packer.MultiErrorAppend(errs, c.RunConfig.Prepare(&c.ctx)...)
 	errs = packer.MultiErrorAppend(errs, c.ShutdownConfig.Prepare(&c.ctx)...)
@@ -53,6 +53,10 @@ func (c *PushCommand) Run(args []string) int {
 		return 1
 	}

+	if message != "" {
+		c.Ui.Warn("[DEPRECATED] -m/-message is deprecated and will be removed in a future Packer release")
+	}
+
 	args = f.Args()
 	if len(args) != 1 {
 		f.Usage()
@@ -268,9 +272,6 @@ Usage: packer push [options] TEMPLATE

 Options:

-  -m, -message=<detail>    A message to identify the purpose or changes in this
-                           Packer template much like a VCS commit message
-
   -name=<name>             The destination build in Atlas. This is in a format
                            "username/name".

@@ -0,0 +1,34 @@
+package common
+
+import (
+	"errors"
+
+	"github.com/mitchellh/packer/template/interpolate"
+)
+
+// HTTPConfig contains configuration for the local HTTP Server
+type HTTPConfig struct {
+	HTTPDir     string `mapstructure:"http_directory"`
+	HTTPPortMin uint   `mapstructure:"http_port_min"`
+	HTTPPortMax uint   `mapstructure:"http_port_max"`
+}
+
+func (c *HTTPConfig) Prepare(ctx *interpolate.Context) []error {
+	// Validation
+	var errs []error
+
+	if c.HTTPPortMin == 0 {
+		c.HTTPPortMin = 8000
+	}
+
+	if c.HTTPPortMax == 0 {
+		c.HTTPPortMax = 9000
+	}
+
+	if c.HTTPPortMin > c.HTTPPortMax {
+		errs = append(errs,
+			errors.New("http_port_min must be less than http_port_max"))
+	}
+
+	return errs
+}
@@ -0,0 +1,45 @@
+package common
+
+import (
+	"testing"
+)
+
+func TestHTTPConfigPrepare_Bounds(t *testing.T) {
+	// Test bad
+	h := HTTPConfig{
+		HTTPPortMin: 1000,
+		HTTPPortMax: 500,
+	}
+	err := h.Prepare(nil)
+	if err == nil {
+		t.Fatal("should have error")
+	}
+
+	// Test good
+	h = HTTPConfig{
+		HTTPPortMin: 0,
+		HTTPPortMax: 0,
+	}
+	err = h.Prepare(nil)
+	if err != nil {
+		t.Fatalf("should not have error: %s", err)
+	}
+	portMin := uint(8000)
+	if h.HTTPPortMin != portMin {
+		t.Fatalf("HTTPPortMin: expected %d got %d", portMin, h.HTTPPortMin)
+	}
+	portMax := uint(9000)
+	if h.HTTPPortMax != portMax {
+		t.Fatalf("HTTPPortMax: expected %d got %d", portMax, h.HTTPPortMax)
+	}
+
+	// Test good
+	h = HTTPConfig{
+		HTTPPortMin: 500,
+		HTTPPortMax: 1000,
+	}
+	err = h.Prepare(nil)
+	if err != nil {
+		t.Fatalf("should not have error: %s", err)
+	}
+}
@@ -26,8 +26,10 @@ type Communicator struct {
 // New creates a new communicator implementation over WinRM.
 func New(config *Config) (*Communicator, error) {
 	endpoint := &winrm.Endpoint{
 		Host:     config.Host,
 		Port:     config.Port,
+		HTTPS:    config.Https,
+		Insecure: config.Insecure,

 		/*
 			TODO
@@ -145,6 +147,8 @@ func (c *Communicator) newCopyClient() (*winrmcp.Winrmcp, error) {
 			User:     c.config.Username,
 			Password: c.config.Password,
 		},
+		Https:                 c.config.Https,
+		Insecure:              c.config.Insecure,
 		OperationTimeout:      c.config.Timeout,
 		MaxOperationsPerShell: 15, // lowest common denominator
 	})
@@ -11,4 +11,6 @@ type Config struct {
 	Username string
 	Password string
 	Timeout  time.Duration
+	Https    bool
+	Insecure bool
 }
@ -36,6 +36,8 @@ type Config struct {
 	WinRMHost     string        `mapstructure:"winrm_host"`
 	WinRMPort     int           `mapstructure:"winrm_port"`
 	WinRMTimeout  time.Duration `mapstructure:"winrm_timeout"`
+	WinRMUseSSL   bool          `mapstructure:"winrm_use_ssl"`
+	WinRMInsecure bool          `mapstructure:"winrm_insecure"`
 }

 // Port returns the port that will be used for access based on config.
@ -129,6 +129,8 @@ func (s *StepConnectWinRM) waitForWinRM(state multistep.StateBag, cancel <-chan
 		Username: user,
 		Password: password,
 		Timeout:  s.Config.WinRMTimeout,
+		Https:    s.Config.WinRMUseSSL,
+		Insecure: s.Config.WinRMInsecure,
 	})
 	if err != nil {
 		log.Printf("[ERROR] WinRM connection err: %s", err)
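Taken together, the hunks above thread two new template keys, `winrm_use_ssl` and `winrm_insecure`, from the helper config down to the WinRM endpoint. A sketch of how a template might use them (the builder type, username, and timeout are placeholders, not taken from this change):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "winrm_timeout": "10m"
    }
  ]
}
```

`winrm_insecure` skips certificate validation on the HTTPS endpoint, which is why both flags are plumbed through separately.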
@ -22,23 +22,24 @@ import (
 type Config struct {
 	common.PackerConfig `mapstructure:",squash"`

 	ChefEnvironment            string `mapstructure:"chef_environment"`
+	EncryptedDataBagSecretPath string `mapstructure:"encrypted_data_bag_secret_path"`
 	SslVerifyMode              string `mapstructure:"ssl_verify_mode"`
 	ConfigTemplate             string `mapstructure:"config_template"`
 	ExecuteCommand             string `mapstructure:"execute_command"`
 	InstallCommand             string `mapstructure:"install_command"`
 	Json                       map[string]interface{}
 	NodeName                   string   `mapstructure:"node_name"`
 	PreventSudo                bool     `mapstructure:"prevent_sudo"`
 	RunList                    []string `mapstructure:"run_list"`
 	ServerUrl                  string   `mapstructure:"server_url"`
 	SkipCleanClient            bool     `mapstructure:"skip_clean_client"`
 	SkipCleanNode              bool     `mapstructure:"skip_clean_node"`
 	SkipInstall                bool     `mapstructure:"skip_install"`
 	StagingDir                 string   `mapstructure:"staging_directory"`
 	ClientKey                  string   `mapstructure:"client_key"`
 	ValidationKeyPath          string   `mapstructure:"validation_key_path"`
 	ValidationClientName       string   `mapstructure:"validation_client_name"`

 	ctx interpolate.Context
 }
@ -48,13 +49,15 @@ type Provisioner struct {
 }

 type ConfigTemplate struct {
 	NodeName             string
 	ServerUrl            string
 	ClientKey            string
 	ValidationKeyPath    string
 	ValidationClientName string
+	EncryptedDataBagSecretPath string
 	ChefEnvironment      string
 	SslVerifyMode        string
+
+	HasEncryptedDataBagSecretPath bool
 }

 type ExecuteTemplate struct {
@ -118,6 +121,15 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
 			errs, fmt.Errorf("server_url must be set"))
 	}

+	if p.config.EncryptedDataBagSecretPath != "" {
+		pFileInfo, err := os.Stat(p.config.EncryptedDataBagSecretPath)
+
+		if err != nil || pFileInfo.IsDir() {
+			errs = packer.MultiErrorAppend(
+				errs, fmt.Errorf("Bad encrypted data bag secret '%s': %s", p.config.EncryptedDataBagSecretPath, err))
+		}
+	}
+
 	jsonValid := true
 	for k, v := range p.config.Json {
 		p.config.Json[k], err = p.deepJsonFix(k, v)
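The validation above requires that `encrypted_data_bag_secret_path`, when set, points at an existing regular file rather than a directory. A hedged chef-client provisioner snippet using the new key (server URL and local path are hypothetical):

```json
{
  "provisioners": [
    {
      "type": "chef-client",
      "server_url": "https://chef.example.com/organizations/myorg",
      "encrypted_data_bag_secret_path": "./secrets/encrypted_data_bag_secret"
    }
  ]
}
```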
@ -175,8 +187,16 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
 		}
 	}

+	encryptedDataBagSecretPath := ""
+	if p.config.EncryptedDataBagSecretPath != "" {
+		encryptedDataBagSecretPath = fmt.Sprintf("%s/encrypted_data_bag_secret", p.config.StagingDir)
+		if err := p.uploadFile(ui, comm, encryptedDataBagSecretPath, p.config.EncryptedDataBagSecretPath); err != nil {
+			return fmt.Errorf("Error uploading encrypted data bag secret: %s", err)
+		}
+	}
+
 	configPath, err := p.createConfig(
-		ui, comm, nodeName, serverUrl, p.config.ClientKey, remoteValidationKeyPath, p.config.ValidationClientName, p.config.ChefEnvironment, p.config.SslVerifyMode)
+		ui, comm, nodeName, serverUrl, p.config.ClientKey, remoteValidationKeyPath, p.config.ValidationClientName, encryptedDataBagSecretPath, p.config.ChefEnvironment, p.config.SslVerifyMode)
 	if err != nil {
 		return fmt.Errorf("Error creating Chef config file: %s", err)
 	}
@ -236,7 +256,17 @@ func (p *Provisioner) uploadDirectory(ui packer.Ui, comm packer.Communicator, ds
 	return comm.UploadDir(dst, src, nil)
 }

+func (p *Provisioner) uploadFile(ui packer.Ui, comm packer.Communicator, dst string, src string) error {
+	f, err := os.Open(src)
+	if err != nil {
+		return err
+	}
+	defer f.Close()
+
+	return comm.Upload(dst, f, nil)
+}
+
-func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, nodeName string, serverUrl string, clientKey string, remoteKeyPath string, validationClientName string, chefEnvironment string, sslVerifyMode string) (string, error) {
+func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, nodeName string, serverUrl string, clientKey string, remoteKeyPath string, validationClientName string, encryptedDataBagSecretPath string, chefEnvironment string, sslVerifyMode string) (string, error) {
 	ui.Message("Creating configuration file 'client.rb'")

 	// Read the template
@ -258,13 +288,15 @@ func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, nodeN

 	ctx := p.config.ctx
 	ctx.Data = &ConfigTemplate{
 		NodeName:             nodeName,
 		ServerUrl:            serverUrl,
 		ClientKey:            clientKey,
 		ValidationKeyPath:    remoteKeyPath,
 		ValidationClientName: validationClientName,
 		ChefEnvironment:      chefEnvironment,
 		SslVerifyMode:        sslVerifyMode,
+		EncryptedDataBagSecretPath:    encryptedDataBagSecretPath,
+		HasEncryptedDataBagSecretPath: encryptedDataBagSecretPath != "",
 	}
 	configString, err := interpolate.Render(tpl, &ctx)
 	if err != nil {
@ -587,6 +619,9 @@ log_level :info
 log_location STDOUT
 chef_server_url "{{.ServerUrl}}"
 client_key "{{.ClientKey}}"
+{{if .HasEncryptedDataBagSecretPath}}
+encrypted_data_bag_secret "{{.EncryptedDataBagSecretPath}}"
+{{end}}
 {{if ne .ValidationClientName ""}}
 validation_client_name "{{.ValidationClientName}}"
 {{else}}
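For reference, when the secret path is set, a `client.rb` rendered from this template would look roughly like the following (the server URL is hypothetical; the staging directory shown is an assumption about the provisioner's default):

```ruby
log_level        :info
log_location     STDOUT
chef_server_url  "https://chef.example.com/organizations/myorg"
client_key       "/tmp/packer-chef-client/client.pem"
encrypted_data_bag_secret "/tmp/packer-chef-client/encrypted_data_bag_secret"
validation_client_name "chef-validator"
```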
@ -138,6 +138,49 @@ func TestProvisionerPrepare_serverUrl(t *testing.T) {
 	}
 }

+func TestProvisionerPrepare_encryptedDataBagSecretPath(t *testing.T) {
+	var err error
+	var p Provisioner
+
+	// Test no config template
+	config := testConfig()
+	delete(config, "encrypted_data_bag_secret_path")
+	err = p.Prepare(config)
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	// Test with a file
+	tf, err := ioutil.TempFile("", "packer")
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+	defer os.Remove(tf.Name())
+
+	config = testConfig()
+	config["encrypted_data_bag_secret_path"] = tf.Name()
+	p = Provisioner{}
+	err = p.Prepare(config)
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	// Test with a directory
+	td, err := ioutil.TempDir("", "packer")
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+	defer os.RemoveAll(td)
+
+	config = testConfig()
+	config["encrypted_data_bag_secret_path"] = td
+	p = Provisioner{}
+	err = p.Prepare(config)
+	if err == nil {
+		t.Fatal("should have err")
+	}
+}
+
 func TestProvisioner_createDir(t *testing.T) {
 	p1 := &Provisioner{config: Config{PreventSudo: true}}
 	p2 := &Provisioner{config: Config{PreventSudo: false}}
@ -1,33 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# Get the parent directory of where this script is.
-SOURCE="${BASH_SOURCE[0]}"
-while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
-DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
-
-# Change into that dir because we expect that
-cd $DIR
-
-# Get the version from the command line
-VERSION=$1
-if [ -z $VERSION ]; then
-  echo "Please specify a version."
-  exit 1
-fi
-
-# Make sure we have a bintray API key
-if [ -z $BINTRAY_API_KEY ]; then
-  echo "Please set your bintray API key in the BINTRAY_API_KEY env var."
-  exit 1
-fi
-
-for ARCHIVE in ./pkg/dist/*; do
-  ARCHIVE_NAME=$(basename ${ARCHIVE})
-
-  echo Uploading: $ARCHIVE_NAME
-  curl \
-    -T ${ARCHIVE} \
-    -umitchellh:${BINTRAY_API_KEY} \
-    "https://api.bintray.com/content/mitchellh/packer/packer/${VERSION}/${ARCHIVE_NAME}"
-done
@ -1,40 +0,0 @@
-#!/usr/bin/env bash
-
-# Set the tmpdir
-if [ -z "$TMPDIR" ]; then
-  TMPDIR="/tmp"
-fi
-
-# Create a temporary build dir and make sure we clean it up. For
-# debugging, comment out the trap line.
-DEPLOY=`mktemp -d $TMPDIR/packer-www-XXXXXX`
-trap "rm -rf $DEPLOY" INT TERM EXIT
-
-# Get the parent directory of where this script is.
-SOURCE="${BASH_SOURCE[0]}"
-while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
-DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
-
-# Copy into tmpdir
-shopt -s dotglob
-cp -r $DIR/website/* $DEPLOY/
-
-# Change into that directory
-pushd $DEPLOY &>/dev/null
-
-# Ignore some stuff
-touch .gitignore
-echo ".sass-cache" >> .gitignore
-echo "build" >> .gitignore
-echo "vendor" >> .gitignore
-
-# Add everything
-git init -q .
-git add .
-git commit -q -m "Deploy by $USER"
-
-git remote add heroku git@heroku.com:packer-www.git
-git push -f heroku master
-
-# Go back to our root
-popd &>/dev/null
@ -1,2 +0,0 @@
-https://github.com/heroku/heroku-buildpack-ruby.git
-https://github.com/hashicorp/heroku-buildpack-middleman.git
@ -1 +0,0 @@
-2.2.2
@ -1,7 +1,5 @@
 source "https://rubygems.org"

-ruby "2.2.2"
-
 gem "middleman-hashicorp", github: "hashicorp/middleman-hashicorp"
 gem "middleman-breadcrumbs"
 gem "htmlbeautifier"
@ -1,6 +1,6 @@
 GIT
   remote: git://github.com/hashicorp/middleman-hashicorp.git
-  revision: 15cbda0cf1d963fa71292dee921229e7ee618272
+  revision: 953baf8762b915cf57553bcc82bc946ad777056f
   specs:
     middleman-hashicorp (0.2.0)
       bootstrap-sass (~> 3.3)

@ -21,18 +21,18 @@ GIT
 GEM
   remote: https://rubygems.org/
   specs:
-    activesupport (4.2.4)
+    activesupport (4.2.5)
       i18n (~> 0.7)
       json (~> 1.7, >= 1.7.7)
       minitest (~> 5.1)
       thread_safe (~> 0.3, >= 0.3.4)
       tzinfo (~> 1.1)
-    autoprefixer-rails (6.0.3)
+    autoprefixer-rails (6.2.3)
       execjs
       json
-    bootstrap-sass (3.3.5.1)
-      autoprefixer-rails (>= 5.0.0.1)
-      sass (>= 3.3.0)
+    bootstrap-sass (3.3.6)
+      autoprefixer-rails (>= 5.2.1)
+      sass (>= 3.3.4)
     builder (3.2.2)
     capybara (2.4.4)
       mime-types (>= 1.16)

@ -40,11 +40,11 @@ GEM
       rack (>= 1.0.0)
       rack-test (>= 0.5.4)
       xpath (~> 2.0)
-    chunky_png (1.3.4)
+    chunky_png (1.3.5)
     coffee-script (2.4.1)
       coffee-script-source
       execjs
-    coffee-script-source (1.9.1.1)
+    coffee-script-source (1.10.0)
     commonjs (0.2.7)
     compass (1.0.3)
       chunky_png (~> 1.2)

@ -63,7 +63,7 @@ GEM
       eventmachine (>= 0.12.9)
       http_parser.rb (~> 0.6.0)
     erubis (2.7.0)
-    eventmachine (1.0.8)
+    eventmachine (1.0.9)
     execjs (2.6.0)
     ffi (1.9.10)
     git-version-bump (0.15.1)

@ -81,23 +81,23 @@ GEM
     less (2.6.0)
       commonjs (~> 0.2.7)
     libv8 (3.16.14.13)
-    listen (3.0.3)
+    listen (3.0.5)
       rb-fsevent (>= 0.9.3)
       rb-inotify (>= 0.9)
-    middleman (3.4.0)
+    middleman (3.4.1)
       coffee-script (~> 2.2)
       compass (>= 1.0.0, < 2.0.0)
       compass-import-once (= 1.0.5)
       execjs (~> 2.0)
       haml (>= 4.0.5)
       kramdown (~> 1.2)
-      middleman-core (= 3.4.0)
+      middleman-core (= 3.4.1)
       middleman-sprockets (>= 3.1.2)
       sass (>= 3.4.0, < 4.0)
       uglifier (~> 2.5)
     middleman-breadcrumbs (0.2.0)
       middleman (>= 3.3.5)
-    middleman-core (3.4.0)
+    middleman-core (3.4.1)
       activesupport (~> 4.1)
       bundler (~> 1.1)
       capybara (~> 2.4.4)

@ -109,7 +109,7 @@ GEM
       rack (>= 1.4.5, < 2.0)
       thor (>= 0.15.2, < 2.0)
       tilt (~> 1.4.1, < 2.0)
-    middleman-livereload (3.4.3)
+    middleman-livereload (3.4.6)
       em-websocket (~> 0.5.1)
       middleman-core (>= 3.3)
       rack-livereload (~> 0.3.15)

@ -121,15 +121,17 @@ GEM
       sprockets (~> 2.12.1)
       sprockets-helpers (~> 1.1.0)
       sprockets-sass (~> 1.3.0)
-    middleman-syntax (2.0.0)
-      middleman-core (~> 3.2)
+    middleman-syntax (2.1.0)
+      middleman-core (>= 3.2)
       rouge (~> 1.0)
-    mime-types (2.6.2)
-    mini_portile (0.6.2)
-    minitest (5.8.1)
+    mime-types (3.0)
+      mime-types-data (~> 3.2015)
+    mime-types-data (3.2015.1120)
+    mini_portile2 (2.0.0)
+    minitest (5.8.3)
     multi_json (1.11.2)
-    nokogiri (1.6.6.2)
-      mini_portile (~> 0.6.0)
+    nokogiri (1.6.7.1)
+      mini_portile2 (~> 2.0.0.rc2)
     padrino-helpers (0.12.5)
       i18n (~> 0.6, >= 0.6.7)
       padrino-support (= 0.12.5)

@ -148,13 +150,13 @@ GEM
     rack-ssl-enforcer (0.2.9)
     rack-test (0.6.3)
       rack (>= 1.0)
-    rb-fsevent (0.9.6)
+    rb-fsevent (0.9.7)
     rb-inotify (0.9.5)
       ffi (>= 0.5.0)
-    redcarpet (3.3.3)
+    redcarpet (3.3.4)
     ref (2.0.0)
     rouge (1.10.1)
-    sass (3.4.19)
+    sass (3.4.21)
     sprockets (2.12.4)
       hike (~> 1.2)
       multi_json (~> 1.0)
@ -1 +0,0 @@
-web: bundle exec thin start -p $PORT
@ -5,22 +5,22 @@ $script = <<SCRIPT
 sudo apt-get -y update

 # RVM/Ruby
-sudo apt-get -y install curl
-sudo apt-get -y install git
+sudo apt-get -qy install curl git libgmp3-dev
 gpg --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3
+# Install rvm and the latest version of ruby
 curl -sSL https://get.rvm.io | bash -s stable
 . ~/.bashrc
 . ~/.bash_profile
-rvm install 2.0.0
-rvm --default use 2.0.0
+rvm install ruby-2.2.2
+gem install bundler

 # Middleman deps
 cd /vagrant
-bundle
+make dev
 SCRIPT

 Vagrant.configure(2) do |config|
-  config.vm.box = "chef/ubuntu-12.04"
+  config.vm.box = "bento/ubuntu-14.04"
   config.vm.network "private_network", ip: "33.33.30.10"
   config.vm.provision "shell", inline: $script, privileged: false
   config.vm.synced_folder ".", "/vagrant", type: "rsync"
@ -0,0 +1,41 @@
+{
+  "variables": {
+    "aws_access_key_id": "{{ env `AWS_ACCESS_KEY_ID` }}",
+    "aws_secret_access_key": "{{ env `AWS_SECRET_ACCESS_KEY` }}",
+    "aws_region": "{{ env `AWS_REGION` }}",
+    "fastly_api_key": "{{ env `FASTLY_API_KEY` }}"
+  },
+  "builders": [
+    {
+      "type": "docker",
+      "image": "ruby:2.3-slim",
+      "commit": "true"
+    }
+  ],
+  "provisioners": [
+    {
+      "type": "file",
+      "source": ".",
+      "destination": "/app"
+    },
+    {
+      "type": "shell",
+      "environment_vars": [
+        "AWS_ACCESS_KEY_ID={{ user `aws_access_key_id` }}",
+        "AWS_SECRET_ACCESS_KEY={{ user `aws_secret_access_key` }}",
+        "AWS_REGION={{ user `aws_region` }}",
+        "FASTLY_API_KEY={{ user `fastly_api_key` }}"
+      ],
+      "inline": [
+        "apt-get update",
+        "apt-get install -y build-essential curl git libffi-dev s3cmd wget",
+        "cd /app",
+        "bundle check || bundle install --jobs 7",
+        "bundle exec middleman build",
+        "/bin/bash ./scripts/deploy.sh"
+      ]
+    }
+  ]
+}
@ -0,0 +1,88 @@
+#!/bin/bash
+set -e
+
+PROJECT="packer"
+PROJECT_URL="www.packer.io"
+FASTLY_SERVICE_ID="7GrxRJP3PVBuqQbyxYQ0MV"
+
+# Ensure the proper AWS environment variables are set
+if [ -z "$AWS_ACCESS_KEY_ID" ]; then
+  echo "Missing AWS_ACCESS_KEY_ID!"
+  exit 1
+fi
+
+if [ -z "$AWS_SECRET_ACCESS_KEY" ]; then
+  echo "Missing AWS_SECRET_ACCESS_KEY!"
+  exit 1
+fi
+
+# Ensure the proper Fastly keys are set
+if [ -z "$FASTLY_API_KEY" ]; then
+  echo "Missing FASTLY_API_KEY!"
+  exit 1
+fi
+
+# Ensure we have s3cmd installed
+if ! command -v "s3cmd" >/dev/null 2>&1; then
+  echo "Missing s3cmd!"
+  exit 1
+fi
+
+# Get the parent directory of where this script is and change into our website
+# directory
+SOURCE="${BASH_SOURCE[0]}"
+while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
+DIR="$(cd -P "$( dirname "$SOURCE" )/.." && pwd)"
+
+# Delete any .DS_Store files for our OS X friends.
+find "$DIR" -type f -name '.DS_Store' -delete
+
+# Upload the files to S3 - we disable mime-type detection by the python library
+# and just guess from the file extension because it's surprisingly more
+# accurate, especially for CSS and javascript. We also tag the uploaded files
+# with the proper Surrogate-Key, which we will later purge in our API call to
+# Fastly.
+if [ -z "$NO_UPLOAD" ]; then
+  echo "Uploading to S3..."
+
+  # Check that the site has been built
+  if [ ! -d "$DIR/build" ]; then
+    echo "Missing compiled website! Run 'make build' to compile!"
+    exit 1
+  fi
+
+  s3cmd \
+    --quiet \
+    --delete-removed \
+    --guess-mime-type \
+    --no-mime-magic \
+    --acl-public \
+    --recursive \
+    --add-header="Cache-Control: max-age=31536000" \
+    --add-header="x-amz-meta-surrogate-key: site-$PROJECT" \
+    sync "$DIR/build/" "s3://hc-sites/$PROJECT/latest/"
+fi
+
+# Perform a soft-purge of the surrogate key.
+if [ -z "$NO_PURGE" ]; then
+  echo "Purging Fastly cache..."
+  curl \
+    --fail \
+    --silent \
+    --output /dev/null \
+    --request "POST" \
+    --header "Accept: application/json" \
+    --header "Fastly-Key: $FASTLY_API_KEY" \
+    --header "Fastly-Soft-Purge: 1" \
+    "https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge/site-$PROJECT"
+fi
+
+# Warm the cache with recursive wget.
+if [ -z "$NO_WARM" ]; then
+  echo "Warming Fastly cache..."
+  wget \
+    --recursive \
+    --delete-after \
+    --quiet \
+    "https://$PROJECT_URL/"
+fi
(19 binary website image files updated; most were recompressed to smaller sizes, e.g. 524 B → 346 B, 143 KiB → 85 KiB, 40 KiB → 22 KiB.)
@ -14,7 +14,7 @@ dedicated users willing to help through various mediums.
 **IRC:** `#packer-tool` on Freenode.

 **Mailing List:** [Packer Google
-Group](http://groups.google.com/group/packer-tool)
+Group](https://groups.google.com/group/packer-tool)

 **Bug Tracker:** [Issue tracker on
 GitHub](https://github.com/mitchellh/packer/issues). Please only use this for

@ -31,14 +31,14 @@ list as contributors come and go.

 <div class="person">

-  <img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
+  <img class="pull-left" src="https://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
   <div class="bio">
     <h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
     <p>
      Mitchell Hashimoto is the creator of Packer. He developed the
      core of Packer as well as the Amazon, VirtualBox, and VMware
      builders. In addition to Packer, Mitchell is the creator of
-     <a href="http://www.vagrantup.com">Vagrant</a>. He is self
+     <a href="https://www.vagrantup.com">Vagrant</a>. He is self
      described as "automation obsessed."
     </p>
   </div>

@ -47,7 +47,7 @@ list as contributors come and go.

 <div class="person">

-  <img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
+  <img class="pull-left" src="https://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
   <div class="bio">
     <h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
     <p>

@ -60,7 +60,7 @@ list as contributors come and go.

 <div class="person">

-  <img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
+  <img class="pull-left" src="https://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
   <div class="bio">
     <h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
     <p>

@ -75,7 +75,7 @@ list as contributors come and go.

 <div class="person">

-  <img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
+  <img class="pull-left" src="https://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
   <div class="bio">
     <h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
     <p>

@ -90,7 +90,7 @@ open source enthusiast, published author, and freelance consultant.

 <div class="person">

-  <img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
+  <img class="pull-left" src="https://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
   <div class="bio">
     <h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
     <p>
The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an
EBS volume as the root device. For more information on the difference between
instance storage and EBS-backed instances, see the ["storage for the root
device" section in the EC2
documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

The difference between this builder and the `amazon-ebs` builder is that this
builder is able to build an EBS-backed AMI without launching a new EC2 instance.
account, it is up to you to use, delete, etc. the AMI.

This builder works by creating a new EBS volume from an existing source AMI and
attaching it to an already-running EC2 instance. Once attached, a
[chroot](https://en.wikipedia.org/wiki/Chroot) is used to provision the system
within that volume. After provisioning, the volume is detached, snapshotted, and
an AMI is made.
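As a sketch, a minimal `amazon-chroot` template mirrors the other Amazon builders; the access keys, AMI ID, and AMI name below are placeholders, not values prescribed by this page:

``` {.javascript}
{
  "type": "amazon-chroot",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "source_ami": "ami-72b9e018",
  "ami_name": "packer-amazon-chroot {{timestamp}}"
}
```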
page_title: 'Amazon AMI Builder (EBS backed)'

Type: `amazon-ebs`

The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in [EC2](https://aws.amazon.com/ec2/). For more information on
the difference between EBS-backed instances and instance-store backed instances,
see the ["storage for the root device" section in the EC2
documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

This builder builds an AMI by launching an EC2 instance from a source AMI,
provisioning that running machine, and then creating an AMI from that machine.
  example, "/dev/sdh" or "xvdh"). Required when specifying `volume_size`.
- `virtual_name` (string) - The virtual device name. See the documentation on
  [Block Device
  Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
  for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
  block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
  volume supports. See the documentation on
  [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
  for more information
- `ami_description` (string) - The description to set for the
  resulting AMI(s). By default this description is empty.
  instance in. Leave this empty to allow Amazon to auto-assign.

- `ebs_optimized` (boolean) - Mark instance as [EBS
  Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
  Default `false`.

- `enhanced_networking` (boolean) - Enable enhanced
  AMI if one with the same name already exists. Default `false`.

- `iam_instance_profile` (string) - The name of an [IAM instance
  profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
  to launch the EC2 instance with.

- `launch_block_device_mappings` (array of block device mappings) - Add the
Here is a basic example. It is completely valid except for the access keys:

"access_key": "YOUR KEY HERE",
"secret_key": "YOUR SECRET KEY HERE",
"region": "us-east-1",
"source_ami": "ami-72b9e018",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-quick-start {{timestamp}}"
}
the /dev/sdb and /dev/sdc block device mappings to the finished AMI.

"access_key": "YOUR KEY HERE",
"secret_key": "YOUR SECRET KEY HERE",
"region": "us-east-1",
"source_ami": "ami-72b9e018",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-quick-start {{timestamp}}",
"ami_block_device_mappings": [
Here is an example using the optional AMI tags. This will add the tags

"access_key": "YOUR KEY HERE",
"secret_key": "YOUR SECRET KEY HERE",
"region": "us-east-1",
"source_ami": "ami-72b9e018",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-quick-start {{timestamp}}",
"tags": {
The `amazon-instance` Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the ["storage for the
root device" section in the EC2
documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

This builder builds an AMI by launching an EC2 instance from an existing
instance-storage backed AMI, provisioning that running machine, and then
The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
account, it is up to you to use, delete, etc. the AMI.

-> **Note** This builder requires that the [Amazon EC2 AMI
Tools](https://aws.amazon.com/developertools/368) are installed onto the machine.
This can be done within a provisioner, but must be done before the builder
finishes running.
  example, "/dev/sdh" or "xvdh"). Required when specifying `volume_size`.
- `virtual_name` (string) - The virtual device name. See the documentation on
  [Block Device
  Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
  for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
  block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
  volume supports. See the documentation on
  [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
  for more information
- `ami_description` (string) - The description to set for the
  resulting AMI(s). By default this description is empty.
  the "custom bundle commands" section below for more information.

- `ebs_optimized` (boolean) - Mark instance as [EBS
  Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
  Default `false`.

- `enhanced_networking` (boolean) - Enable enhanced
  AMI if one with the same name already exists. Default `false`.

- `iam_instance_profile` (string) - The name of an [IAM instance
  profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
  to launch the EC2 instance with.

- `launch_block_device_mappings` (array of block device mappings) - Add the
Packer supports the following builders at the moment:

- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
  from an existing EC2 instance by mounting the root device and using a
  [Chroot](https://en.wikipedia.org/wiki/Chroot) environment to provision
  that device. This is an **advanced builder and should not be used by
  newcomers**. However, it is also the fastest way to build an EBS-backed AMI
  since no new EC2 instance needs to be launched.
Credentials are resolved in the following order:

1. Values hard-coded in the packer template are always authoritative.
2. *Variables* in the packer template may be resolved from command-line flags
   or from environment variables. Please read about [User
   Variables](https://www.packer.io/docs/templates/user-variables.html)
   for details.
3. If no credentials are found, packer falls back to automatic lookup.
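As a sketch of the variable-based resolution above, a template can pull keys from environment variables through user variables; the variable names and values below are illustrative:

``` {.javascript}
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-72b9e018",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-quick-start {{timestamp}}"
  }]
}
```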
following steps:

   - First `AWS_SECRET_ACCESS_KEY`, then `AWS_SECRET_KEY`

2. Look for [local AWS configuration
   files](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
   - First `~/.aws/credentials`
   - Next based on `AWS_PROFILE`
packer build on your workstation, in Atlas, or on another build server.

## Using an IAM Instance Profile

If AWS keys are not specified in the template, in a
[credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
file, or through environment variables, Packer will use credentials provided by
the instance's IAM profile, if it has one.
Packer to work:

"ec2:DeleteVolume",
"ec2:CreateKeypair",
"ec2:DeleteKeypair",
"ec2:DescribeSubnets",
"ec2:CreateSecurityGroup",
"ec2:DeleteSecurityGroup",
"ec2:AuthorizeSecurityGroupIngress",
roles, you may encounter an error like this one:

==> amazon-ebs: Error launching source instance: You are not authorized to perform this operation.

You can read more about why this happens on the [Amazon Security
Blog](https://blogs.aws.amazon.com/security/post/Tx3M0IFB5XBOCQX/Granting-Permission-to-Launch-EC2-Instances-with-IAM-Roles-PassRole-Permission).
The example policy below may help packer work with IAM roles. Note that this
example provides more than the minimal set of permissions needed for packer to
work, but specifics will depend on your use-case.
page_title: DigitalOcean Builder

Type: `digitalocean`

The `digitalocean` Packer builder is able to create new images for use with
[DigitalOcean](https://www.digitalocean.com). The builder takes a source image,
runs any provisioning necessary on the image after launching it, then snapshots
it into a reusable image. This reusable image can then be used as the foundation
of new servers that are launched within DigitalOcean.
page_title: Docker Builder

Type: `docker`

The `docker` Packer builder builds [Docker](https://www.docker.io) images using
Docker. The builder starts a Docker container, runs provisioners within this
container, then exports the container for reuse or commits the image.
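A minimal export configuration looks like the sketch below; the source image and output path are illustrative:

``` {.javascript}
{
  "type": "docker",
  "image": "ubuntu",
  "export_path": "image.tar"
}
```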
the section on [Dockerfiles](#toc_8).

The Docker builder must run on a machine that has Docker installed. Therefore
the builder only works on machines that support Docker (modern Linux machines).
If you want to use Packer to build Docker containers on another platform, use
[Vagrant](https://www.vagrantup.com) to start a Linux environment, then run
Packer within that environment.

## Basic Example: Export
You must specify (only) one of `commit`, `discard`, or `export_path`.

- `discard` (boolean) - Throw away the container when the build is complete.
  This is useful for the [artifice
  post-processor](https://www.packer.io/docs/post-processors/artifice.html).

- `export_path` (string) - The path where the final container will be exported
  as a tar file.
nearly-identical sequence definitions, as demonstrated by the example below:

}
```

<span id="amazon-ec2-container-registry"></span>

## Amazon EC2 Container Registry

Packer can tag and push images for use in
[Amazon EC2 Container Registry](https://aws.amazon.com/ecr/). The
post-processors work as described above, and example configuration properties
are shown below:

``` {.javascript}
{
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "12345.dkr.ecr.us-east-1.amazonaws.com/packer",
        "tag": "0.7"
      },
      {
        "type": "docker-push",
        "login": true,
        "login_email": "none",
        "login_username": "AWS",
        "login_password": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
        "login_server": "https://12345.dkr.ecr.us-east-1.amazonaws.com/"
      }
    ]
  ]
}
```

See the
[AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html)
for steps to obtain Amazon ECR registry credentials.

## Dockerfiles

This builder allows you to build Docker images *without* Dockerfiles.
   Not required if you run Packer on a GCE instance with a service account.
   Instructions for creating a file or using service accounts are above.

- `address` (string) - The name of a pre-allocated static external IP address.
  Note: this must be the name and not the actual IP address.

- `disk_size` (integer) - The size of the disk in GB. This defaults to `10`,
  which is 10GB.
page_title: 'Parallels Builder (from an ISO)'

Type: `parallels-iso`

The Parallels Packer builder is able to create [Parallels Desktop for
Mac](https://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format, starting from an ISO image.

The builder builds a virtual machine by creating a new virtual machine from
page_title: 'Parallels Builder (from a PVM)'

Type: `parallels-pvm`

This Parallels builder is able to create [Parallels Desktop for
Mac](https://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format, starting from an existing PVM (exported virtual machine
image).
# Parallels Builder

The Parallels Packer builder is able to create [Parallels Desktop for
Mac](https://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format.

Packer comes with multiple builders able to create Parallels machines,
the following Parallels builders:

## Requirements

In addition to [Parallels Desktop for
Mac](https://www.parallels.com/products/desktop/), this requires the [Parallels
Virtualization SDK](https://www.parallels.com/downloads/desktop/).

The SDK can be installed by downloading and following the instructions in the
dmg.
directory of the SSH user.

In order to perform extra customization of the virtual machine, a template can
define extra calls to `VBoxManage` to perform.
[VBoxManage](https://www.virtualbox.org/manual/ch08.html) is the command-line
interface to VirtualBox where you can completely control VirtualBox. It can be
used to do things such as set RAM, CPUs, etc.
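For example, a template might pass extra `VBoxManage` arguments as a sketch like the one below; `{{.Name}}` is substituted with the VM name at build time, and the specific values are illustrative:

``` {.javascript}
{
  "vboxmanage": [
    ["modifyvm", "{{.Name}}", "--memory", "1024"],
    ["modifyvm", "{{.Name}}", "--cpus", "2"]
  ]
}
```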
directory of the SSH user.

In order to perform extra customization of the virtual machine, a template can
define extra calls to `VBoxManage` to perform.
[VBoxManage](https://www.virtualbox.org/manual/ch08.html) is the command-line
interface to VirtualBox where you can completely control VirtualBox. It can be
used to do things such as set RAM, CPUs, etc.
# VirtualBox Builder

The VirtualBox Packer builder is able to create
[VirtualBox](https://www.virtualbox.org) virtual machines and export them in the
OVA or OVF format.

Packer comes with multiple builders able to create VirtualBox machines,
Type: `vmware-iso`

This VMware Packer builder is able to create VMware virtual machines from an ISO
file as a source. It currently supports building virtual machines on hosts
running [VMware Fusion](https://www.vmware.com/products/fusion/overview.html) for
OS X, [VMware
Workstation](https://www.vmware.com/products/workstation/overview.html) for Linux
and Windows, and [VMware Player](https://www.vmware.com/products/player/) on
Linux. It can also build machines directly on [VMware vSphere
Hypervisor](https://www.vmware.com/products/vsphere-hypervisor/) using SSH, as
opposed to the vSphere API.

The builder builds a virtual machine by creating a new virtual machine from
  default is "1", which corresponds to a growable virtual disk split in
  2GB files. This option is for advanced usage; modify it only if you know what
  you're doing. For more information, please consult the [Virtual Disk Manager
  User's Guide](https://www.vmware.com/pdf/VirtualDiskManager.pdf) for desktop
  VMware clients. For ESXi, refer to the proper ESXi documentation.

- `floppy_files` (array of strings) - A list of files to place onto a floppy
@@ -187,6 +187,10 @@ builder.
     the remote machine. By default this is empty. This only has an effect if
     `remote_type` is enabled.

+-   `remote_private_key_file` (string) - The path to the PEM encoded private key
+    file for the user used to access the remote machine. By default this is empty.
+    This only has an effect if `remote_type` is enabled.
+
 -   `remote_type` (string) - The type of remote machine that will be used to
     build this VM rather than a local desktop product. The only value accepted
     for this currently is "esx5". If this is not set, a desktop product will
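As a sketch, the remote options added above fit into a builder definition like this (the host, username, and key path are hypothetical):

``` {.liquid}
{
  "type": "vmware-iso",
  "remote_type": "esx5",
  "remote_host": "esxi.example.com",
  "remote_username": "root",
  "remote_private_key_file": "/home/me/.ssh/esxi.pem"
}
```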
@@ -398,6 +402,8 @@ modify as well:

 -   `remote_password` - The SSH password for access to the remote machine.

+-   `remote_private_key_file` - The SSH key for access to the remote machine.
+
 -   `format` (string) - Either "ovf", "ova" or "vmx", this specifies the output
     format of the exported virtual machine. This defaults to "ovf".
     Before using this option, you need to install `ovftool`.
@@ -15,10 +15,10 @@ Type: `vmware-vmx`
 This VMware Packer builder is able to create VMware virtual machines from an
 existing VMware virtual machine (a VMX file). It currently supports building
 virtual machines on hosts running [VMware Fusion
-Professional](http://www.vmware.com/products/fusion-professional/) for OS X,
+Professional](https://www.vmware.com/products/fusion-professional/) for OS X,
-[VMware Workstation](http://www.vmware.com/products/workstation/overview.html)
+[VMware Workstation](https://www.vmware.com/products/workstation/overview.html)
 for Linux and Windows, and [VMware
-Player](http://www.vmware.com/products/player/) on Linux.
+Player](https://www.vmware.com/products/player/) on Linux.

 The builder builds a virtual machine by cloning the VMX file using the clone
 capabilities introduced in VMware Fusion Professional 6, Workstation 10, and
@@ -26,7 +26,7 @@ both the post-processor and push commands can be used independently.
 scripts, to Atlas. Take care not to upload files that you don't intend to, like
 secrets or large binaries. **If you have secrets in your Packer template, you
 should [move them into environment
-variables](https://packer.io/docs/templates/user-variables.html).**
+variables](https://www.packer.io/docs/templates/user-variables.html).**

 Most push behavior is [configured in your packer
 template](/docs/templates/push.html). You can override or supplement your
@@ -34,10 +34,6 @@ configuration using the options below.

 ## Options

--   `-message` - A message to identify the purpose or changes in this Packer
-    template much like a VCS commit message. This message will be passed to the
-    Packer build service. This option is also available as a short option `-m`.
-
 -   `-token` - Your access token for the Atlas API.

 -> Login to Atlas to [generate an Atlas
@@ -59,7 +55,7 @@ you can also use `-token` on the command line.
 Push a Packer template:

 ``` {.shell}
-$ packer push -m "Updating the apache version" template.json
+$ packer push template.json
 ```

 Push a Packer template with a custom token:
@@ -81,7 +77,7 @@ download it during the packer run.

 If you want to build a private `.iso` file you can upload the `.iso` to a secure
 file hosting service like [Amazon
-S3](http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html),
+S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html),
 [Google Cloud
 Storage](https://cloud.google.com/storage/docs/gsutil/commands/signurl), or
 [Azure File
@@ -157,7 +157,7 @@ it would be convenient to cache the file. This sort of caching is a core part of
 Packer that is exposed to builders.

 The cache interface is `packer.Cache`. It behaves much like a Go
-[RWMutex](http://golang.org/pkg/sync/#RWMutex). The builder requests a "lock" on
+[RWMutex](https://golang.org/pkg/sync/#RWMutex). The builder requests a "lock" on
 certain cache keys, and is given exclusive access to that key for the duration
 of the lock. This locking mechanism allows multiple builders to share cache data
 even though they're running in parallel.
@@ -14,7 +14,7 @@ reading this, it is assumed that you're comfortable with Packer and also know
 the [basics of how Plugins work](/docs/extend/plugins.html), from a user
 standpoint.

-Packer plugins must be written in [Go](http://golang.org/), so it is also
+Packer plugins must be written in [Go](https://golang.org/), so it is also
 assumed that you're familiar with the language. This page will not be a Go
 language tutorial. Thankfully, if you are familiar with Go, the Go toolchain
 makes it extremely easy to develop Packer plugins.
|
@ -36,7 +36,7 @@ uses, because they're completely isolated into the process space of the plugin
|
||||||
itself.
|
itself.
|
||||||
|
|
||||||
And, thanks to Go's
|
And, thanks to Go's
|
||||||
[interfaces](http://golang.org/doc/effective_go.html#interfaces_and_types), it
|
[interfaces](https://golang.org/doc/effective_go.html#interfaces_and_types), it
|
||||||
doesn't even look like inter-process communication is occurring. You just use
|
doesn't even look like inter-process communication is occurring. You just use
|
||||||
the interfaces like normal, but in fact they're being executed in a remote
|
the interfaces like normal, but in fact they're being executed in a remote
|
||||||
process. Pretty cool.
|
process. Pretty cool.
|
||||||
|
|
|
@@ -32,9 +32,9 @@ After unzipping the package, the directory should contain a set of binary
 programs, such as `packer`, `packer-build-amazon-ebs`, etc. The final step to
 installation is to make sure the directory you installed Packer to is on the
 PATH. See [this
-page](http://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux)
+page](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux)
 for instructions on setting the PATH on Linux and Mac. [This
-page](http://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows)
+page](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows)
 contains instructions for setting the PATH on Windows.

 ## Verifying the Installation
@@ -62,10 +62,10 @@ the log variable `PACKER_LOG_PATH` using powershell environment variables. For
 example:

     $env:PACKER_LOG=1
     $env:PACKER_LOG_PATH="packerlog.txt"

 If you find a bug with Packer, please include the detailed log by using a
-service such as [gist](http://gist.github.com).
+service such as [gist](https://gist.github.com).

 ## Issues Installing Ubuntu Packages

@@ -25,9 +25,9 @@ extracting the docker container and throwing away the EC2 instance.

 After overriding the artifact with artifice, you can use it with other
 post-processors like
-[compress](https://packer.io/docs/post-processors/compress.html),
+[compress](https://www.packer.io/docs/post-processors/compress.html),
-[docker-push](https://packer.io/docs/post-processors/docker-push.html),
+[docker-push](https://www.packer.io/docs/post-processors/docker-push.html),
-[Atlas](https://packer.io/docs/post-processors/atlas.html), or a third-party
+[Atlas](https://www.packer.io/docs/post-processors/atlas.html), or a third-party
 post-processor.

 Artifice allows you to use the familiar packer workflow to create a fresh,
@@ -67,7 +67,7 @@ The configuration allows you to specify which files comprise your artifact.
 This minimal example:

 1. Spins up a cloned VMware virtual machine
-2. Installs a [consul](https://consul.io/) release
+2. Installs a [consul](https://www.consul.io/) release
 3. Downloads the consul binary
 4. Packages it into a `.tar.gz` file
 5. Uploads it to Atlas.
@@ -38,7 +38,7 @@ Here is an example workflow:
    example `hashicorp/foobar`, to create the artifact in Atlas or update the
    version if the artifact already exists
 3. The new version is ready and available to be used in deployments with a tool
-   like [Terraform](https://terraform.io)
+   like [Terraform](https://www.terraform.io)

 ## Configuration

@@ -88,8 +88,8 @@ you can also use `token` configuration option.
     "access_key": "{{user `aws_access_key`}}",
     "secret_key": "{{user `aws_secret_key`}}",
     "region": "us-east-1",
-    "source_ami": "ami-de0d9eb7",
+    "source_ami": "ami-72b9e018",
-    "instance_type": "t1.micro",
+    "instance_type": "t2.micro",
     "ssh_username": "ubuntu",
     "ami_name": "atlas-example {{timestamp}}"
   }],
@@ -18,12 +18,12 @@ Type: `vagrant-cloud`

 The Packer Vagrant Cloud post-processor receives a Vagrant box from the
 `vagrant` post-processor and pushes it to Vagrant Cloud. [Vagrant
-Cloud](https://vagrantcloud.com) hosts and serves boxes to Vagrant, allowing you
+Cloud](https://atlas.hashicorp.com) hosts and serves boxes to Vagrant, allowing you
 to version and distribute boxes to an organization in a simple way.

 You'll need to be familiar with Vagrant Cloud, have an upgraded account to
 enable box hosting, and be distributing your box via the [shorthand
-name](http://docs.vagrantup.com/v2/cli/box.html) configuration.
+name](https://docs.vagrantup.com/v2/cli/box.html) configuration.

 ## Workflow

@@ -13,7 +13,7 @@ page_title: 'Vagrant Post-Processor'
 Type: `vagrant`

 The Packer Vagrant post-processor takes a build and converts the artifact into a
-valid [Vagrant](http://www.vagrantup.com) box, if it can. This lets you use
+valid [Vagrant](https://www.vagrantup.com) box, if it can. This lets you use
 Packer to automatically create arbitrarily complex Vagrant boxes, and is in fact
 how the official boxes distributed by Vagrant are created.

@@ -22,7 +22,7 @@ If you've never used a post-processor before, please read the documentation on
 knowledge will be expected for the remainder of this document.

 Because Vagrant boxes are
-[provider-specific](http://docs.vagrantup.com/v2/boxes/format.html), the Vagrant
+[provider-specific](https://docs.vagrantup.com/v2/boxes/format.html), the Vagrant
 post-processor is hardcoded to understand how to convert the artifacts of
 certain builders into proper boxes for their respective providers.

@@ -15,7 +15,7 @@ Type: `ansible-local`
 The `ansible-local` Packer provisioner configures Ansible to run on the machine
 by Packer from local Playbook and Role files. Playbooks and Roles can be
 uploaded from your local machine to the remote machine. Ansible is run in [local
-mode](http://docs.ansible.com/playbooks_delegation.html#local-playbooks) via the
+mode](https://docs.ansible.com/ansible/playbooks_delegation.html#local-playbooks) via the
 `ansible-playbook` command.

 -> **Note:** Ansible will *not* be installed automatically by this
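As a sketch, a template wires this provisioner in like so (the playbook path is illustrative), with Ansible running against localhost on the machine being built:

``` {.liquid}
{
  "provisioners": [
    {
      "type": "ansible-local",
      "playbook_file": "./playbook.yml"
    }
  ]
}
```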
@@ -50,6 +50,10 @@ configuration is actually required.
     should use a custom configuration template. See the dedicated "Chef
     Configuration" section below for more details.

+-   `encrypted_data_bag_secret_path` (string) - The path to the file containing
+    the secret for encrypted data bags. By default, this is empty, so no secret
+    will be available.
+
 -   `execute_command` (string) - The command used to execute Chef. This has
     various [configuration template
     variables](/docs/templates/configuration-templates.html) available. See
@@ -71,9 +75,9 @@ configuration is actually required.
     then the sudo will be omitted.

 -   `run_list` (array of strings) - The [run
-    list](http://docs.chef.io/essentials_node_object_run_lists.html)
-    for Chef. By default this is empty, and will use the run list sent down by
-    the Chef Server.
+    list](http://docs.chef.io/essentials_node_object_run_lists.html) for Chef.
+    By default this is empty, and will use the run list sent down by the
+    Chef Server.

 -   `server_url` (string) - The URL to the Chef server. This is required.

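A provisioner sketch combining the options above (the server URL and recipe names are hypothetical):

``` {.liquid}
{
  "type": "chef-client",
  "server_url": "https://chef.example.com/organizations/example",
  "run_list": ["recipe[base]", "recipe[nginx]"]
}
```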
@@ -136,6 +140,7 @@ This template is a [configuration
 template](/docs/templates/configuration-templates.html) and has a set of
 variables available to use:

+-   `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret
 -   `NodeName` - The node name set in the configuration.
 -   `ServerUrl` - The URL of the Chef Server set in the configuration.
 -   `ValidationKeyPath` - Path to the validation key, if it is set.
@@ -181,3 +186,65 @@ to 777. This is to ensure that Packer can upload and make use of that directory.
 However, once the machine is created, you usually don't want to keep these
 directories with those permissions. To change the permissions on the
 directories, append a shell provisioner after Chef to modify them.
+
+## Examples
+
+### Chef Client Local Mode
+
+The following example shows how to run the `chef-client` provisioner in local
+mode, while passing a `run_list` using a variable.
+
+**Local environment variables**
+
+    # Machine's Chef directory
+    export PACKER_CHEF_DIR=/var/chef-packer
+    # Comma separated run_list
+    export PACKER_CHEF_RUN_LIST="recipe[apt],recipe[nginx]"
+    ...
+
+**Packer variables**
+
+Set the necessary Packer variables using environment variables or provide a [var
+file](/docs/templates/user-variables.html).
+
+``` {.liquid}
+"variables": {
+  "chef_dir": "{{env `PACKER_CHEF_DIR`}}",
+  "chef_run_list": "{{env `PACKER_CHEF_RUN_LIST`}}",
+  "chef_client_config_tpl": "{{env `PACKER_CHEF_CLIENT_CONFIG_TPL`}}",
+  "packer_chef_bootstrap_dir": "{{env `PACKER_CHEF_BOOTSTRAP_DIR`}}",
+  "packer_uid": "{{env `PACKER_UID`}}",
+  "packer_gid": "{{env `PACKER_GID`}}"
+}
+```
+
+**Set up the** `chef-client` **provisioner**
+
+Make sure we have the correct directories and permissions for the `chef-client`
+provisioner. You will need to bootstrap the Chef run by providing the necessary
+cookbooks using Berkshelf or some other means.
+
+``` {.liquid}
+{
+  "type": "file",
+  "source": "{{user `packer_chef_bootstrap_dir`}}",
+  "destination": "/tmp/bootstrap"
+},
+{
+  "type": "shell",
+  "inline": [
+    "sudo mkdir -p {{user `chef_dir`}}",
+    "sudo mkdir -p /tmp/packer-chef-client",
+    "sudo chown {{user `packer_uid`}}.{{user `packer_gid`}} /tmp/packer-chef-client",
+    "sudo sh /tmp/bootstrap/bootstrap.sh"
+  ]
+},
+{
+  "type": "chef-client",
+  "server_url": "http://localhost:8889",
+  "config_template": "{{user `chef_client_config_tpl`}}/client.rb.tpl",
+  "skip_clean_node": true,
+  "skip_clean_client": true,
+  "run_list": "{{user `chef_run_list`}}"
+}
+```