Merge branch 'master' into go-versions

M. Marsh 2018-06-27 14:07:26 -07:00 committed by GitHub
commit 67aa8f3a74
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
222 changed files with 32745 additions and 2687 deletions

View File

@@ -6,7 +6,8 @@ sudo: false
 language: go
 go:
-  - 1.x
+  - 1.9.x
+  - 1.10.x
 install:
   - make deps

View File

@@ -1,13 +1,59 @@
-## (UNRELEASED)
+## 1.2.4 (May 29, 2018)
 ### BUG FIXES:
+* builder/amazon: Can now force the chroot builder to mount an entire block
+  device instead of a partition [GH-6194]
+* builder/azure: windows-sql-cloud is now in the default list of projects to
+  check for provided images. [GH-6210]
+* builder/chroot: A new template option, `nvme_device_path` has been added to
+  provide a workaround for users who need the amazon-chroot builder to mount
+  an NVMe volume on their instances. [GH-6295]
+* builder/hyper-v: Fix command for mounting multiple disks [GH-6267]
+* builder/hyperv: Enable IP retrieval for Server 2008 R2 hosts. [GH-6219]
+* builder/hyperv: Fix bug in MAC address specification on Hyper-V. [GH-6187]
+* builder/parallels-pvm: Add missing disk compaction step. [GH-6202]
-* builder/vmware-esxi: Remove floppy files from the remote server on cleanup. [GH-6206]
+* builder/vmware-esxi: Remove floppy files from the remote server on cleanup.
+  [GH-6206]
+* communicator/winrm: Updated dependencies to fix a race condition [GH-6261]
-* core: When using `-on-error=[abort|ask]`, output the error to the user. [GH-6252]
+* core: When using `-on-error=[abort|ask]`, output the error to the user.
+  [GH-6252]
+* provisioner/puppet: Extra-Arguments are no longer prematurely
+  interpolated. [GH-6215]
+* provisioner/shell: Remove file stat that was causing problems uploading files
+  [GH-6239]
 ### IMPROVEMENTS:
+* builder/amazon: Amazon builders other than `chroot` now support T2 unlimited
+  instances [GH-6265]
+* builder/azure: Allow device login for US government cloud. [GH-6105]
+* builder/azure: Devicelogin Support for Windows [GH-6285]
+* builder/azure: Enable simultaneous builds within one resource group.
+  [GH-6231]
+* builder/azure: Faster deletion of Azure Resource Groups. [GH-6269]
 * builder/azure: Updated Azure SDK to v15.0.0 [GH-6224]
+* builder/hyper-v: Hyper-V builds now connect to vnc display by default when
+  building [GH-6243]
+* builder/hyper-v: New `use_fixed_vhd_format` allows vm export in an Azure-
+  compatible format [GH-6101]
+* builder/hyperv: New config option for specifying what secure boot template to
+  use, allowing secure boot of linux vms. [GH-5883]
+* builder/qemu: Add support for hvf accelerator. [GH-6193]
+* builder/scaleway: Fix SSH communicator connection issue. [GH-6238]
+* core: Add opt-in Packer top-level command autocomplete [GH-5454]
+* post-processor/shell-local: New options have been added to create feature
+  parity with the shell-local provisioner. This feature now works on Windows
+  hosts. [GH-5956]
+* provisioner/chef: New config option allows user to skip cleanup of chef
+  client staging directory. [GH-4300]
+* provisioner/shell-local: Can now access automatically-generated WinRM
+  password as variable [GH-6251]
+* provisioner/shell-local: New options have been added to create feature parity
+  with the shell-local post-processor. This feature now works on Windows
+  hosts. [GH-5956]
+* builder/virtualbox: Use HTTPS to download guest editions, now that it's
+  available. [GH-6406]
 ## 1.2.3 (April 25, 2018)
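The `nvme_device_path` template option called out in the 1.2.4 notes is set on the amazon-chroot builder block. A minimal, hypothetical template fragment (the AMI id, image name, and device paths below are illustrative, not taken from this commit):

```json
{
  "builders": [
    {
      "type": "amazon-chroot",
      "source_ami": "ami-0000example",
      "ami_name": "example-chroot-{{timestamp}}",
      "device_path": "/dev/sdf",
      "nvme_device_path": "/dev/nvme1n1p"
    }
  ]
}
```

On NVMe-backed instance types (for example c5/m5), the builder then mounts the path given by `nvme_device_path` instead of the attached `device_path`.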

View File

@@ -4,11 +4,13 @@ VET?=$(shell ls -d */ | grep -v vendor | grep -v website)
 GITSHA:=$(shell git rev-parse HEAD)
 # Get the current local branch name from git (if we can, this may be blank)
 GITBRANCH:=$(shell git symbolic-ref --short HEAD 2>/dev/null)
-GOFMT_FILES?=$$(find . -not -path "./vendor/*" -name "*.go")
 GOOS=$(shell go env GOOS)
 GOARCH=$(shell go env GOARCH)
 GOPATH=$(shell go env GOPATH)
+# gofmt
+UNFORMATTED_FILES=$(shell find . -not -path "./vendor/*" -name "*.go" | xargs gofmt -s -l)
 # Get the git commit
 GIT_DIRTY=$(shell test -n "`git status --porcelain`" && echo "+CHANGES" || true)
 GIT_COMMIT=$(shell git rev-parse --short HEAD)
@@ -58,10 +60,18 @@ dev: deps ## Build and install a development build
 	@cp $(GOPATH)/bin/packer pkg/$(GOOS)_$(GOARCH)
 fmt: ## Format Go code
-	@gofmt -w -s $(GOFMT_FILES)
+	@gofmt -w -s $(UNFORMATTED_FILES)
 fmt-check: ## Check go code formatting
-	$(CURDIR)/scripts/gofmtcheck.sh $(GOFMT_FILES)
+	@echo "==> Checking that code complies with gofmt requirements..."
+	@if [ ! -z "$(UNFORMATTED_FILES)" ]; then \
+		echo "gofmt needs to be run on the following files:"; \
+		echo "$(UNFORMATTED_FILES)" | xargs -n1; \
+		echo "You can use the command: \`make fmt\` to reformat code."; \
+		exit 1; \
+	else \
+		echo "Check passed."; \
+	fi
 fmt-docs:
 	@find ./website/source/docs -name "*.md" -exec pandoc --wrap auto --columns 79 --atx-headers -s -f "markdown_github+yaml_metadata_block" -t "markdown_github+yaml_metadata_block" {} -o {} \;

View File

@@ -32,10 +32,6 @@ The images that Packer creates can easily be turned into
 [Vagrant](http://www.vagrantup.com) boxes.
 ## Quick Start
-Download and install packages and dependencies
-```
-go get github.com/hashicorp/packer
-```
 **Note:** There is a great
 [introduction and getting started guide](https://www.packer.io/intro)

View File

@@ -94,7 +94,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		&stepCheckAlicloudSourceImage{
 			SourceECSImageId: b.config.AlicloudSourceImage,
 		},
-		&StepConfigAlicloudKeyPair{
+		&stepConfigAlicloudKeyPair{
 			Debug:          b.config.PackerDebug,
 			KeyPairName:    b.config.SSHKeyPairName,
 			PrivateKeyFile: b.config.Comm.SSHPrivateKey,
@@ -136,10 +136,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 			ZoneId: b.config.ZoneId,
 		})
 	if b.chooseNetworkType() == VpcNet {
-		steps = append(steps, &setpConfigAlicloudEIP{
+		steps = append(steps, &stepConfigAlicloudEIP{
 			AssociatePublicIpAddress: b.config.AssociatePublicIpAddress,
 			RegionId:                 b.config.AlicloudRegion,
 			InternetChargeType:       b.config.InternetChargeType,
+			InternetMaxBandwidthOut:  b.config.InternetMaxBandwidthOut,
 		})
 	} else {
 		steps = append(steps, &stepConfigAlicloudPublicIP{
@@ -147,7 +148,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		})
 	}
 	steps = append(steps,
-		&stepAttachKeyPar{},
+		&stepAttachKeyPair{},
 		&stepRunAlicloudInstance{},
 		&stepMountAlicloudDisk{},
 		&communicator.StepConnect{
@@ -170,12 +171,12 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 			AlicloudImageName: b.config.AlicloudImageName,
 		},
 		&stepCreateAlicloudImage{},
-		&setpRegionCopyAlicloudImage{
+		&stepRegionCopyAlicloudImage{
 			AlicloudImageDestinationRegions: b.config.AlicloudImageDestinationRegions,
 			AlicloudImageDestinationNames:   b.config.AlicloudImageDestinationNames,
 			RegionId:                        b.config.AlicloudRegion,
 		},
-		&setpShareAlicloudImage{
+		&stepShareAlicloudImage{
 			AlicloudImageShareAccounts:   b.config.AlicloudImageShareAccounts,
 			AlicloudImageUNShareAccounts: b.config.AlicloudImageUNShareAccounts,
 			RegionId:                     b.config.AlicloudRegion,

View File

@@ -12,10 +12,10 @@ import (
 	"github.com/hashicorp/packer/packer"
 )
-type stepAttachKeyPar struct {
+type stepAttachKeyPair struct {
 }
-func (s *stepAttachKeyPar) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepAttachKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
 	keyPairName := state.Get("keyPair").(string)
 	if keyPairName == "" {
 		return multistep.ActionContinue
@@ -50,7 +50,7 @@ func (s *stepAttachKeyPar) Run(_ context.Context, state multistep.StateBag) mult
 	return multistep.ActionContinue
 }
-func (s *stepAttachKeyPar) Cleanup(state multistep.StateBag) {
+func (s *stepAttachKeyPair) Cleanup(state multistep.StateBag) {
 	keyPairName := state.Get("keyPair").(string)
 	if keyPairName == "" {
 		return

View File

@@ -10,20 +10,22 @@ import (
 	"github.com/hashicorp/packer/packer"
 )
-type setpConfigAlicloudEIP struct {
+type stepConfigAlicloudEIP struct {
 	AssociatePublicIpAddress bool
 	RegionId                 string
 	InternetChargeType       string
+	InternetMaxBandwidthOut  int
 	allocatedId string
 }
-func (s *setpConfigAlicloudEIP) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepConfigAlicloudEIP) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
 	client := state.Get("client").(*ecs.Client)
 	ui := state.Get("ui").(packer.Ui)
 	instance := state.Get("instance").(*ecs.InstanceAttributesType)
 	ui.Say("Allocating eip")
 	ipaddress, allocateId, err := client.AllocateEipAddress(&ecs.AllocateEipAddressArgs{
 		RegionId: common.Region(s.RegionId), InternetChargeType: common.InternetChargeType(s.InternetChargeType),
+		Bandwidth: s.InternetMaxBandwidthOut,
 	})
 	if err != nil {
 		state.Put("error", err)
@@ -55,7 +57,7 @@ func (s *setpConfigAlicloudEIP) Run(_ context.Context, state multistep.StateBag)
 	return multistep.ActionContinue
 }
-func (s *setpConfigAlicloudEIP) Cleanup(state multistep.StateBag) {
+func (s *stepConfigAlicloudEIP) Cleanup(state multistep.StateBag) {
 	if len(s.allocatedId) == 0 {
 		return
 	}

View File

@@ -13,7 +13,7 @@ import (
 	"github.com/hashicorp/packer/packer"
 )
-type StepConfigAlicloudKeyPair struct {
+type stepConfigAlicloudKeyPair struct {
 	Debug        bool
 	SSHAgentAuth bool
 	DebugKeyPath string
@@ -25,7 +25,7 @@ type StepConfigAlicloudKeyPair struct {
 	keyName string
 }
-func (s *StepConfigAlicloudKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepConfigAlicloudKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
 	ui := state.Get("ui").(packer.Ui)
 	if s.PrivateKeyFile != "" {
@@ -108,7 +108,7 @@ func (s *StepConfigAlicloudKeyPair) Run(_ context.Context, state multistep.State
 	return multistep.ActionContinue
 }
-func (s *StepConfigAlicloudKeyPair) Cleanup(state multistep.StateBag) {
+func (s *stepConfigAlicloudKeyPair) Cleanup(state multistep.StateBag) {
 	// If no key name is set, then we never created it, so just return
 	// If we used an SSH private key file, do not go about deleting
 	// keypairs

View File

@@ -85,7 +85,8 @@ func (s *stepConfigAlicloudVPC) Cleanup(state multistep.StateBag) {
 		e, _ := err.(*common.Error)
 		if (e.Code == "DependencyViolation.Instance" || e.Code == "DependencyViolation.RouteEntry" ||
 			e.Code == "DependencyViolation.VSwitch" ||
-			e.Code == "DependencyViolation.SecurityGroup") && time.Now().Before(timeoutPoint) {
+			e.Code == "DependencyViolation.SecurityGroup" ||
+			e.Code == "Forbbiden") && time.Now().Before(timeoutPoint) {
 			time.Sleep(1 * time.Second)
 			continue
 		}

View File

@@ -10,13 +10,13 @@ import (
 	"github.com/hashicorp/packer/packer"
 )
-type setpRegionCopyAlicloudImage struct {
+type stepRegionCopyAlicloudImage struct {
 	AlicloudImageDestinationRegions []string
 	AlicloudImageDestinationNames   []string
 	RegionId                        string
 }
-func (s *setpRegionCopyAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepRegionCopyAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
 	if len(s.AlicloudImageDestinationRegions) == 0 {
 		return multistep.ActionContinue
 	}
@@ -52,7 +52,7 @@ func (s *setpRegionCopyAlicloudImage) Run(_ context.Context, state multistep.Sta
 	return multistep.ActionContinue
 }
-func (s *setpRegionCopyAlicloudImage) Cleanup(state multistep.StateBag) {
+func (s *stepRegionCopyAlicloudImage) Cleanup(state multistep.StateBag) {
 	_, cancelled := state.GetOk(multistep.StateCancelled)
 	_, halted := state.GetOk(multistep.StateHalted)
 	if cancelled || halted {

View File

@@ -10,13 +10,13 @@ import (
 	"github.com/hashicorp/packer/packer"
 )
-type setpShareAlicloudImage struct {
+type stepShareAlicloudImage struct {
 	AlicloudImageShareAccounts   []string
 	AlicloudImageUNShareAccounts []string
 	RegionId                     string
 }
-func (s *setpShareAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepShareAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
 	client := state.Get("client").(*ecs.Client)
 	ui := state.Get("ui").(packer.Ui)
 	alicloudImages := state.Get("alicloudimages").(map[string]string)
@@ -37,7 +37,7 @@ func (s *setpShareAlicloudImage) Run(_ context.Context, state multistep.StateBag
 	return multistep.ActionContinue
 }
-func (s *setpShareAlicloudImage) Cleanup(state multistep.StateBag) {
+func (s *stepShareAlicloudImage) Cleanup(state multistep.StateBag) {
 	_, cancelled := state.GetOk(multistep.StateCancelled)
 	_, halted := state.GetOk(multistep.StateHalted)
 	if cancelled || halted {

View File

@@ -33,6 +33,7 @@ type Config struct {
 	CommandWrapper string   `mapstructure:"command_wrapper"`
 	CopyFiles      []string `mapstructure:"copy_files"`
 	DevicePath     string   `mapstructure:"device_path"`
+	NVMEDevicePath string   `mapstructure:"nvme_device_path"`
 	FromScratch    bool     `mapstructure:"from_scratch"`
 	MountOptions   []string `mapstructure:"mount_options"`
 	MountPartition string   `mapstructure:"mount_partition"`

View File

@@ -3,8 +3,8 @@ package chroot
 import (
 	"fmt"
+	sl "github.com/hashicorp/packer/common/shell-local"
 	"github.com/hashicorp/packer/packer"
-	"github.com/hashicorp/packer/post-processor/shell-local"
 	"github.com/hashicorp/packer/template/interpolate"
 )
@@ -21,7 +21,9 @@ func RunLocalCommands(commands []string, wrappedCommand CommandWrapper, ctx inte
 	}
 	ui.Say(fmt.Sprintf("Executing command: %s", command))
-	comm := &shell_local.Communicator{}
+	comm := &sl.Communicator{
+		ExecuteCommand: []string{"sh", "-c", command},
+	}
 	cmd := &packer.RemoteCmd{Command: command}
 	if err := cmd.StartWithUi(comm, ui); err != nil {
 		return fmt.Errorf("Error executing command: %s", err)

View File

@@ -35,6 +35,10 @@ func (s *StepMountDevice) Run(_ context.Context, state multistep.StateBag) multi
 	config := state.Get("config").(*Config)
 	ui := state.Get("ui").(packer.Ui)
 	device := state.Get("device").(string)
+	if config.NVMEDevicePath != "" {
+		// customizable device path for mounting NVME block devices on c5 and m5 HVM
+		device = config.NVMEDevicePath
+	}
 	wrappedCommand := state.Get("wrappedCommand").(CommandWrapper)
 	var virtualizationType string
@@ -47,6 +51,7 @@ func (s *StepMountDevice) Run(_ context.Context, state multistep.StateBag) multi
 	}
 	ctx := config.ctx
 	ctx.Data = &mountPathData{Device: filepath.Base(device)}
+
 	mountPath, err := interpolate.Render(config.MountPath, &ctx)
@@ -98,7 +103,7 @@ func (s *StepMountDevice) Run(_ context.Context, state multistep.StateBag) multi
 		ui.Error(err.Error())
 		return multistep.ActionHalt
 	}
+	log.Printf("[DEBUG] (step mount) mount command is %s", mountCommand)
 	cmd := ShellCommand(mountCommand)
 	cmd.Stderr = stderr
 	if err := cmd.Run(); err != nil {

View File

@@ -5,6 +5,7 @@ func listEC2Regions() []string {
 	return []string{
 		"ap-northeast-1",
 		"ap-northeast-2",
+		"ap-northeast-3",
 		"ap-south-1",
 		"ap-southeast-1",
 		"ap-southeast-2",

View File

@@ -1,11 +1,11 @@
 package common
 import (
-	"errors"
 	"fmt"
 	"net"
 	"os"
 	"regexp"
+	"strings"
 	"time"
 	"github.com/hashicorp/packer/common/uuid"
@@ -30,25 +30,26 @@ func (d *AmiFilterOptions) Empty() bool {
 type RunConfig struct {
 	AssociatePublicIpAddress bool   `mapstructure:"associate_public_ip_address"`
 	AvailabilityZone         string `mapstructure:"availability_zone"`
+	DisableStopInstance      bool   `mapstructure:"disable_stop_instance"`
 	EbsOptimized             bool   `mapstructure:"ebs_optimized"`
+	EnableT2Unlimited        bool   `mapstructure:"enable_t2_unlimited"`
 	IamInstanceProfile       string `mapstructure:"iam_instance_profile"`
+	InstanceInitiatedShutdownBehavior string `mapstructure:"shutdown_behavior"`
 	InstanceType             string `mapstructure:"instance_type"`
 	RunTags                  map[string]string `mapstructure:"run_tags"`
+	SecurityGroupId          string   `mapstructure:"security_group_id"`
+	SecurityGroupIds         []string `mapstructure:"security_group_ids"`
 	SourceAmi                string `mapstructure:"source_ami"`
 	SourceAmiFilter          AmiFilterOptions `mapstructure:"source_ami_filter"`
 	SpotPrice                string `mapstructure:"spot_price"`
 	SpotPriceAutoProduct     string `mapstructure:"spot_price_auto_product"`
-	DisableStopInstance      bool   `mapstructure:"disable_stop_instance"`
-	SecurityGroupId          string   `mapstructure:"security_group_id"`
-	SecurityGroupIds         []string `mapstructure:"security_group_ids"`
-	TemporarySGSourceCidr    string `mapstructure:"temporary_security_group_source_cidr"`
 	SubnetId                 string `mapstructure:"subnet_id"`
 	TemporaryKeyPairName     string `mapstructure:"temporary_key_pair_name"`
+	TemporarySGSourceCidr    string `mapstructure:"temporary_security_group_source_cidr"`
 	UserData                 string `mapstructure:"user_data"`
 	UserDataFile             string `mapstructure:"user_data_file"`
-	WindowsPasswordTimeout   time.Duration `mapstructure:"windows_password_timeout"`
 	VpcId                    string `mapstructure:"vpc_id"`
-	InstanceInitiatedShutdownBehavior string `mapstructure:"shutdown_behavior"`
+	WindowsPasswordTimeout   time.Duration `mapstructure:"windows_password_timeout"`
 	// Communicator settings
 	Comm communicator.Config `mapstructure:",squash"`
@@ -84,32 +85,39 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
 		c.SSHInterface != "public_dns" &&
 		c.SSHInterface != "private_dns" &&
 		c.SSHInterface != "" {
-		errs = append(errs, errors.New(fmt.Sprintf("Unknown interface type: %s", c.SSHInterface)))
+		errs = append(errs, fmt.Errorf("Unknown interface type: %s", c.SSHInterface))
 	}
 	if c.SSHKeyPairName != "" {
 		if c.Comm.Type == "winrm" && c.Comm.WinRMPassword == "" && c.Comm.SSHPrivateKey == "" {
-			errs = append(errs, errors.New("ssh_private_key_file must be provided to retrieve the winrm password when using ssh_keypair_name."))
+			errs = append(errs, fmt.Errorf("ssh_private_key_file must be provided to retrieve the winrm password when using ssh_keypair_name."))
 		} else if c.Comm.SSHPrivateKey == "" && !c.Comm.SSHAgentAuth {
-			errs = append(errs, errors.New("ssh_private_key_file must be provided or ssh_agent_auth enabled when ssh_keypair_name is specified."))
+			errs = append(errs, fmt.Errorf("ssh_private_key_file must be provided or ssh_agent_auth enabled when ssh_keypair_name is specified."))
 		}
 	}
 	if c.SourceAmi == "" && c.SourceAmiFilter.Empty() {
-		errs = append(errs, errors.New("A source_ami or source_ami_filter must be specified"))
+		errs = append(errs, fmt.Errorf("A source_ami or source_ami_filter must be specified"))
 	}
 	if c.InstanceType == "" {
-		errs = append(errs, errors.New("An instance_type must be specified"))
+		errs = append(errs, fmt.Errorf("An instance_type must be specified"))
 	}
 	if c.SpotPrice == "auto" {
 		if c.SpotPriceAutoProduct == "" {
-			errs = append(errs, errors.New(
+			errs = append(errs, fmt.Errorf(
 				"spot_price_auto_product must be specified when spot_price is auto"))
 		}
 	}
+	if c.SpotPriceAutoProduct != "" {
+		if c.SpotPrice != "auto" {
+			errs = append(errs, fmt.Errorf(
+				"spot_price should be set to auto when spot_price_auto_product is specified"))
+		}
+	}
 	if c.UserData != "" && c.UserDataFile != "" {
 		errs = append(errs, fmt.Errorf("Only one of user_data or user_data_file can be specified."))
 	} else if c.UserDataFile != "" {
@@ -141,6 +149,18 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
 		errs = append(errs, fmt.Errorf("shutdown_behavior only accepts 'stop' or 'terminate' values."))
 	}
+	if c.EnableT2Unlimited {
+		if c.SpotPrice != "" {
+			errs = append(errs, fmt.Errorf("Error: T2 Unlimited cannot be used in conjuction with Spot Instances"))
+		}
+		firstDotIndex := strings.Index(c.InstanceType, ".")
+		if firstDotIndex == -1 {
+			errs = append(errs, fmt.Errorf("Error determining main Instance Type from: %s", c.InstanceType))
+		} else if c.InstanceType[0:firstDotIndex] != "t2" {
+			errs = append(errs, fmt.Errorf("Error: T2 Unlimited enabled with a non-T2 Instance Type: %s", c.InstanceType))
+		}
+	}
 	return errs
 }

View File

@@ -48,7 +48,7 @@ func TestRunConfigPrepare_InstanceType(t *testing.T) {
 	c := testConfig()
 	c.InstanceType = ""
 	if err := c.Prepare(nil); len(err) != 1 {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Should error if an instance_type is not specified")
 	}
 }
@@ -56,14 +56,14 @@ func TestRunConfigPrepare_SourceAmi(t *testing.T) {
 	c := testConfig()
 	c.SourceAmi = ""
 	if err := c.Prepare(nil); len(err) != 1 {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Should error if a source_ami (or source_ami_filter) is not specified")
 	}
 }
 func TestRunConfigPrepare_SourceAmiFilterBlank(t *testing.T) {
 	c := testConfigFilter()
 	if err := c.Prepare(nil); len(err) != 1 {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Should error if source_ami_filter is empty or not specified (and source_ami is not specified)")
 	}
 }
@@ -79,17 +79,58 @@ func TestRunConfigPrepare_SourceAmiFilterGood(t *testing.T) {
 	}
 }
+func TestRunConfigPrepare_EnableT2UnlimitedGood(t *testing.T) {
+	c := testConfig()
+	// Must have a T2 instance type if T2 Unlimited is enabled
+	c.InstanceType = "t2.micro"
+	c.EnableT2Unlimited = true
+	err := c.Prepare(nil)
+	if len(err) > 0 {
+		t.Fatalf("err: %s", err)
+	}
+}
+func TestRunConfigPrepare_EnableT2UnlimitedBadInstanceType(t *testing.T) {
+	c := testConfig()
+	// T2 Unlimited cannot be used with instance types other than T2
+	c.InstanceType = "m5.large"
+	c.EnableT2Unlimited = true
+	err := c.Prepare(nil)
+	if len(err) != 1 {
+		t.Fatalf("Should error if T2 Unlimited is enabled with non-T2 instance_type")
+	}
+}
+func TestRunConfigPrepare_EnableT2UnlimitedBadWithSpotInstanceRequest(t *testing.T) {
+	c := testConfig()
+	// T2 Unlimited cannot be used with Spot Instances
+	c.InstanceType = "t2.micro"
+	c.EnableT2Unlimited = true
+	c.SpotPrice = "auto"
+	c.SpotPriceAutoProduct = "Linux/UNIX"
+	err := c.Prepare(nil)
+	if len(err) != 1 {
+		t.Fatalf("Should error if T2 Unlimited has been used in conjuntion with a Spot Price request")
+	}
+}
 func TestRunConfigPrepare_SpotAuto(t *testing.T) {
 	c := testConfig()
 	c.SpotPrice = "auto"
 	if err := c.Prepare(nil); len(err) != 1 {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Should error if spot_price_auto_product is not set and spot_price is set to auto")
 	}
+	// Good - SpotPrice and SpotPriceAutoProduct are correctly set
 	c.SpotPriceAutoProduct = "foo"
 	if err := c.Prepare(nil); len(err) != 0 {
 		t.Fatalf("err: %s", err)
 	}
+	c.SpotPrice = ""
+	if err := c.Prepare(nil); len(err) != 1 {
+		t.Fatalf("Should error if spot_price is not set to auto and spot_price_auto_product is set")
+	}
 }
 func TestRunConfigPrepare_SSHPort(t *testing.T) {
@@ -125,7 +166,7 @@ func TestRunConfigPrepare_UserData(t *testing.T) {
 	c.UserData = "foo"
 	c.UserDataFile = tf.Name()
 	if err := c.Prepare(nil); len(err) != 1 {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Should error if user_data string and user_data_file have both been specified")
 	}
 }
@@ -137,7 +178,7 @@ func TestRunConfigPrepare_UserDataFile(t *testing.T) {
 	c.UserDataFile = "idontexistidontthink"
 	if err := c.Prepare(nil); len(err) != 1 {
-		t.Fatalf("err: %s", err)
+		t.Fatalf("Should error if the file specified by user_data_file does not exist")
 	}
 	tf, err := ioutil.TempFile("", "packer")

View File

@@ -24,6 +24,7 @@ type StepRunSourceInstance struct {
 	Ctx                               interpolate.Context
 	Debug                             bool
 	EbsOptimized                      bool
+	EnableT2Unlimited                 bool
 	ExpectedRootDevice                string
 	IamInstanceProfile                string
 	InstanceInitiatedShutdownBehavior string
@@ -116,6 +117,11 @@ func (s *StepRunSourceInstance) Run(ctx context.Context, state multistep.StateBa
 		EbsOptimized: &s.EbsOptimized,
 	}
+	if s.EnableT2Unlimited {
+		creditOption := "unlimited"
+		runOpts.CreditSpecification = &ec2.CreditSpecificationRequest{CpuCredits: &creditOption}
+	}
 	// Collect tags for tagging on resource creation
 	var tagSpecs []*ec2.TagSpecification

View File

@@ -148,6 +148,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		Ctx:                b.config.ctx,
 		Debug:              b.config.PackerDebug,
 		EbsOptimized:       b.config.EbsOptimized,
+		EnableT2Unlimited:  b.config.EnableT2Unlimited,
 		ExpectedRootDevice: "ebs",
 		IamInstanceProfile: b.config.IamInstanceProfile,
 		InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior,


@@ -162,6 +162,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		Ctx:                               b.config.ctx,
 		Debug:                             b.config.PackerDebug,
 		EbsOptimized:                      b.config.EbsOptimized,
+		EnableT2Unlimited:                 b.config.EnableT2Unlimited,
 		ExpectedRootDevice:                "ebs",
 		IamInstanceProfile:                b.config.IamInstanceProfile,
 		InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior,


@@ -145,6 +145,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		Ctx:                               b.config.ctx,
 		Debug:                             b.config.PackerDebug,
 		EbsOptimized:                      b.config.EbsOptimized,
+		EnableT2Unlimited:                 b.config.EnableT2Unlimited,
 		ExpectedRootDevice:                "ebs",
 		IamInstanceProfile:                b.config.IamInstanceProfile,
 		InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior,


@@ -230,6 +230,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		Ctx:                b.config.ctx,
 		Debug:              b.config.PackerDebug,
 		EbsOptimized:       b.config.EbsOptimized,
+		EnableT2Unlimited:  b.config.EnableT2Unlimited,
 		IamInstanceProfile: b.config.IamInstanceProfile,
 		InstanceType:       b.config.InstanceType,
 		IsRestricted:       b.config.IsChinaCloud() || b.config.IsGovCloud(),


@@ -10,11 +10,11 @@ import (
 	"strings"
 	"time"

-	packerAzureCommon "github.com/hashicorp/packer/builder/azure/common"
 	armstorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
 	"github.com/Azure/azure-sdk-for-go/storage"
 	"github.com/Azure/go-autorest/autorest/adal"
+	"github.com/dgrijalva/jwt-go"
+	packerAzureCommon "github.com/hashicorp/packer/builder/azure/common"
 	"github.com/hashicorp/packer/builder/azure/common/constants"
 	"github.com/hashicorp/packer/builder/azure/common/lin"
 	packerCommon "github.com/hashicorp/packer/common"

@@ -52,6 +52,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 }

 func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
 	ui.Say("Running builder ...")

 	ctx, cancel := context.WithCancel(context.Background())

@@ -90,6 +91,15 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 	if err := resolver.Resolve(b.config); err != nil {
 		return nil, err
 	}
+	if b.config.ObjectID == "" {
+		b.config.ObjectID = getObjectIdFromToken(ui, spnCloud)
+	} else {
+		ui.Message("You have provided Object_ID which is no longer needed, azure packer builder determines this dynamically from the authentication token")
+	}
+
+	if b.config.ObjectID == "" && b.config.OSType != constants.Target_Linux {
+		return nil, fmt.Errorf("could not determine the ObjectID for the user, which is required for Windows builds")
+	}

 	if b.config.isManagedImage() {
 		group, err := azureClient.GroupsClient.Get(ctx, b.config.ManagedImageResourceGroupName)

@@ -371,10 +381,17 @@ func (b *Builder) getServicePrincipalTokens(say func(string)) (*adal.ServicePrin
 	var err error

 	if b.config.useDeviceLogin {
-		servicePrincipalToken, err = packerAzureCommon.Authenticate(*b.config.cloudEnvironment, b.config.TenantID, say)
+		say("Getting auth token for Service management endpoint")
+		servicePrincipalToken, err = packerAzureCommon.Authenticate(*b.config.cloudEnvironment, b.config.TenantID, say, b.config.cloudEnvironment.ServiceManagementEndpoint)
 		if err != nil {
 			return nil, nil, err
 		}
+		say("Getting token for Vault resource")
+		servicePrincipalTokenVault, err = packerAzureCommon.Authenticate(*b.config.cloudEnvironment, b.config.TenantID, say, strings.TrimRight(b.config.cloudEnvironment.KeyVaultEndpoint, "/"))
+		if err != nil {
+			return nil, nil, err
+		}
 	} else {
 		auth := NewAuthenticate(*b.config.cloudEnvironment, b.config.ClientID, b.config.ClientSecret, b.config.TenantID)

@@ -385,11 +402,39 @@ func (b *Builder) getServicePrincipalTokens(say func(string)) (*adal.ServicePrin
 		servicePrincipalTokenVault, err = auth.getServicePrincipalTokenWithResource(
 			strings.TrimRight(b.config.cloudEnvironment.KeyVaultEndpoint, "/"))
 		if err != nil {
 			return nil, nil, err
 		}
+	}
+
+	err = servicePrincipalToken.EnsureFresh()
+	if err != nil {
+		return nil, nil, err
+	}
+
+	err = servicePrincipalTokenVault.EnsureFresh()
+	if err != nil {
+		return nil, nil, err
 	}

 	return servicePrincipalToken, servicePrincipalTokenVault, nil
 }
+
+func getObjectIdFromToken(ui packer.Ui, token *adal.ServicePrincipalToken) string {
+	claims := jwt.MapClaims{}
+	var p jwt.Parser
+	var err error
+
+	_, _, err = p.ParseUnverified(token.OAuthToken(), claims)
+
+	if err != nil {
+		ui.Error(fmt.Sprintf("Failed to parse the token,Error: %s", err.Error()))
+		return ""
+	}
+	return claims["oid"].(string)
+}


@@ -10,9 +10,9 @@ package arm
 // * ARM_STORAGE_ACCOUNT
 //
 // The subscription in question should have a resource group
-// called "packer-acceptance-test" in "West US" region. The
+// called "packer-acceptance-test" in "South Central US" region. The
 // storage account refered to in the above variable should
-// be inside this resource group and in "West US" as well.
+// be inside this resource group and in "South Central US" as well.
 //
 // In addition, the PACKER_ACC variable should also be set to
 // a non-empty value to enable Packer acceptance tests and the

@@ -23,9 +23,13 @@ package arm
 import (
 	"testing"

+	"fmt"
+
 	builderT "github.com/hashicorp/packer/helper/builder/testing"
+	"os"
 )

+const DeviceLoginAcceptanceTest = "DEVICELOGIN_TEST"
+
 func TestBuilderAcc_ManagedDisk_Windows(t *testing.T) {
 	builderT.Test(t, builderT.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },

@@ -34,6 +38,28 @@ func TestBuilderAcc_ManagedDisk_Windows(t *testing.T) {
 	})
 }

+func TestBuilderAcc_ManagedDisk_Windows_Build_Resource_Group(t *testing.T) {
+	builderT.Test(t, builderT.TestCase{
+		PreCheck: func() { testAccPreCheck(t) },
+		Builder:  &Builder{},
+		Template: testBuilderAccManagedDiskWindowsBuildResourceGroup,
+	})
+}
+
+func TestBuilderAcc_ManagedDisk_Windows_DeviceLogin(t *testing.T) {
+	if os.Getenv(DeviceLoginAcceptanceTest) == "" {
+		t.Skip(fmt.Sprintf(
+			"Device Login Acceptance tests skipped unless env '%s' set, as its requires manual step during execution",
+			DeviceLoginAcceptanceTest))
+		return
+	}
+	builderT.Test(t, builderT.TestCase{
+		PreCheck: func() { testAccPreCheck(t) },
+		Builder:  &Builder{},
+		Template: testBuilderAccManagedDiskWindowsDeviceLogin,
+	})
+}
+
 func TestBuilderAcc_ManagedDisk_Linux(t *testing.T) {
 	builderT.Test(t, builderT.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },

@@ -42,6 +68,20 @@ func TestBuilderAcc_ManagedDisk_Linux(t *testing.T) {
 	})
 }

+func TestBuilderAcc_ManagedDisk_Linux_DeviceLogin(t *testing.T) {
+	if os.Getenv(DeviceLoginAcceptanceTest) == "" {
+		t.Skip(fmt.Sprintf(
+			"Device Login Acceptance tests skipped unless env '%s' set, as its requires manual step during execution",
+			DeviceLoginAcceptanceTest))
+		return
+	}
+	builderT.Test(t, builderT.TestCase{
+		PreCheck: func() { testAccPreCheck(t) },
+		Builder:  &Builder{},
+		Template: testBuilderAccManagedDiskLinuxDeviceLogin,
+	})
+}
+
 func TestBuilderAcc_Blob_Windows(t *testing.T) {
 	builderT.Test(t, builderT.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },

@@ -65,8 +105,7 @@ const testBuilderAccManagedDiskWindows = `
 	"variables": {
 	  "client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
 	  "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
-	  "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}",
-	  "object_id": "{{env ` + "`ARM_OBJECT_ID`" + `}}"
+	  "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
 	},
 	"builders": [{
 	  "type": "test",

@@ -74,7 +113,6 @@ const testBuilderAccManagedDiskWindows = `
 	  "client_id": "{{user ` + "`client_id`" + `}}",
 	  "client_secret": "{{user ` + "`client_secret`" + `}}",
 	  "subscription_id": "{{user ` + "`subscription_id`" + `}}",
-	  "object_id": "{{user ` + "`object_id`" + `}}",

 	  "managed_image_resource_group_name": "packer-acceptance-test",
 	  "managed_image_name": "testBuilderAccManagedDiskWindows-{{timestamp}}",

@@ -89,8 +127,73 @@ const testBuilderAccManagedDiskWindows = `
 	  "winrm_insecure": "true",
 	  "winrm_timeout": "3m",
 	  "winrm_username": "packer",
+	  "async_resourcegroup_delete": "true",

-	  "location": "West US",
+	  "location": "South Central US",
 	  "vm_size": "Standard_DS2_v2"
 	}]
 }
 `

+const testBuilderAccManagedDiskWindowsBuildResourceGroup = `
+{
+	"variables": {
+	  "client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
+	  "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
+	  "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
+	},
+	"builders": [{
+	  "type": "test",
+
+	  "client_id": "{{user ` + "`client_id`" + `}}",
+	  "client_secret": "{{user ` + "`client_secret`" + `}}",
+	  "subscription_id": "{{user ` + "`subscription_id`" + `}}",
+
+	  "build_resource_group_name" : "packer-acceptance-test",
+
+	  "managed_image_resource_group_name": "packer-acceptance-test",
+	  "managed_image_name": "testBuilderAccManagedDiskWindows-{{timestamp}}",
+
+	  "os_type": "Windows",
+	  "image_publisher": "MicrosoftWindowsServer",
+	  "image_offer": "WindowsServer",
+	  "image_sku": "2012-R2-Datacenter",
+
+	  "communicator": "winrm",
+	  "winrm_use_ssl": "true",
+	  "winrm_insecure": "true",
+	  "winrm_timeout": "3m",
+	  "winrm_username": "packer",
+	  "async_resourcegroup_delete": "true",
+
+	  "vm_size": "Standard_DS2_v2"
+	}]
+}
+`
+
+const testBuilderAccManagedDiskWindowsDeviceLogin = `
+{
+	"variables": {
+	  "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
+	},
+	"builders": [{
+	  "type": "test",
+
+	  "subscription_id": "{{user ` + "`subscription_id`" + `}}",
+
+	  "managed_image_resource_group_name": "packer-acceptance-test",
+	  "managed_image_name": "testBuilderAccManagedDiskWindowsDeviceLogin-{{timestamp}}",
+
+	  "os_type": "Windows",
+	  "image_publisher": "MicrosoftWindowsServer",
+	  "image_offer": "WindowsServer",
+	  "image_sku": "2012-R2-Datacenter",
+
+	  "communicator": "winrm",
+	  "winrm_use_ssl": "true",
+	  "winrm_insecure": "true",
+	  "winrm_timeout": "3m",
+	  "winrm_username": "packer",
+
+	  "location": "South Central US",
+	  "vm_size": "Standard_DS2_v2"
 	}]
 }
 `

@@ -118,7 +221,31 @@ const testBuilderAccManagedDiskLinux = `
 	  "image_offer": "UbuntuServer",
 	  "image_sku": "16.04-LTS",
-	  "location": "West US",
+	  "location": "South Central US",
 	  "vm_size": "Standard_DS2_v2"
 	}]
 }
 `

+const testBuilderAccManagedDiskLinuxDeviceLogin = `
+{
+	"variables": {
+	  "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
+	},
+	"builders": [{
+	  "type": "test",
+
+	  "subscription_id": "{{user ` + "`subscription_id`" + `}}",
+
+	  "managed_image_resource_group_name": "packer-acceptance-test",
+	  "managed_image_name": "testBuilderAccManagedDiskLinuxDeviceLogin-{{timestamp}}",
+
+	  "os_type": "Linux",
+	  "image_publisher": "Canonical",
+	  "image_offer": "UbuntuServer",
+	  "image_sku": "16.04-LTS",
+	  "async_resourcegroup_delete": "true",
+
+	  "location": "South Central US",
+	  "vm_size": "Standard_DS2_v2"
+	}]
+}
+`

@@ -157,7 +284,7 @@ const testBuilderAccBlobWindows = `
 	  "winrm_timeout": "3m",
 	  "winrm_username": "packer",

-	  "location": "West US",
+	  "location": "South Central US",
 	  "vm_size": "Standard_DS2_v2"
 	}]
 }

@@ -188,7 +315,7 @@ const testBuilderAccBlobLinux = `
 	  "image_offer": "UbuntuServer",
 	  "image_sku": "16.04-LTS",
-	  "location": "West US",
+	  "location": "South Central US",
 	  "vm_size": "Standard_DS2_v2"
 	}]
 }


@@ -493,9 +493,6 @@ func assertRequiredParametersSet(c *Config, errs *packer.MultiError) {
 	// readable by the ObjectID of the App. There may be another way to handle
 	// this case, but I am not currently aware of it - send feedback.
 	isUseDeviceLogin := func(c *Config) bool {
-		if c.OSType == constants.Target_Windows {
-			return false
-		}

 		return c.SubscriptionID != "" &&
 			c.ClientID == "" &&


@@ -2,13 +2,11 @@ package arm
 import (
 	"fmt"
-	"strings"
 	"testing"
 	"time"

 	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
 	"github.com/hashicorp/packer/builder/azure/common/constants"
-	"github.com/hashicorp/packer/packer"
 )

 // List of configuration parameters that are required by the ARM builder.

@@ -448,39 +446,6 @@ func TestUserDeviceLoginIsEnabledForLinux(t *testing.T) {
 	}
 }

-func TestUseDeviceLoginIsDisabledForWindows(t *testing.T) {
-	config := map[string]string{
-		"capture_name_prefix":    "ignore",
-		"capture_container_name": "ignore",
-		"image_offer":            "ignore",
-		"image_publisher":        "ignore",
-		"image_sku":              "ignore",
-		"location":               "ignore",
-		"storage_account":        "ignore",
-		"resource_group_name":    "ignore",
-		"subscription_id":        "ignore",
-		"os_type":                constants.Target_Windows,
-		"communicator":           "none",
-	}
-
-	_, _, err := newConfig(config, getPackerConfiguration())
-	if err == nil {
-		t.Fatal("Expected test to fail, but it succeeded")
-	}
-
-	multiError, _ := err.(*packer.MultiError)
-	if len(multiError.Errors) != 2 {
-		t.Errorf("Expected to find 2 errors, but found %d errors", len(multiError.Errors))
-	}
-
-	if !strings.Contains(err.Error(), "client_id must be specified") {
-		t.Error("Expected to find error for 'client_id must be specified")
-	}
-	if !strings.Contains(err.Error(), "client_secret must be specified") {
-		t.Error("Expected to find error for 'client_secret must be specified")
-	}
-}
-
 func TestConfigShouldRejectMalformedCaptureNamePrefix(t *testing.T) {
 	config := map[string]string{
 		"capture_container_name": "ignore",


@@ -82,6 +82,7 @@ func (s *StepDeleteResourceGroup) deleteDeploymentResources(ctx context.Context,
 		deploymentOperation := deploymentOperations.Value()
 		// Sometimes an empty operation is added to the list by Azure
 		if deploymentOperation.Properties.TargetResource == nil {
+			deploymentOperations.Next()
 			continue
 		}

@@ -185,6 +185,7 @@ func (s *StepDeployTemplate) Cleanup(state multistep.StateBag) {
 		deploymentOperation := deploymentOperations.Value()
 		// Sometimes an empty operation is added to the list by Azure
 		if deploymentOperation.Properties.TargetResource == nil {
+			deploymentOperations.Next()
 			continue
 		}

 		ui.Say(fmt.Sprintf("  -> %s : '%s'",


@@ -7,6 +7,7 @@ import (
 	"os"
 	"path/filepath"
 	"regexp"
+	"strings"

 	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions"
 	"github.com/Azure/go-autorest/autorest"

@@ -40,8 +41,10 @@ var (
 // Authenticate fetches a token from the local file cache or initiates a consent
 // flow and waits for token to be obtained.
-func Authenticate(env azure.Environment, tenantID string, say func(string)) (*adal.ServicePrincipalToken, error) {
+func Authenticate(env azure.Environment, tenantID string, say func(string), scope string) (*adal.ServicePrincipalToken, error) {
 	clientID, ok := clientIDs[env.Name]
+	var resourceid string
+
 	if !ok {
 		return nil, fmt.Errorf("packer-azure application not set up for Azure environment %q", env.Name)
 	}

@@ -53,9 +56,14 @@ func Authenticate(env azure.Environment, tenantID string, say func(string)) (*ad
 	// for AzurePublicCloud (https://management.core.windows.net/), this old
 	// Service Management scope covers both ASM and ARM.
-	apiScope := env.ServiceManagementEndpoint

-	tokenPath := tokenCachePath(tenantID)
+	if strings.Contains(scope, "vault") {
+		resourceid = "vault"
+	} else {
+		resourceid = "mgmt"
+	}
+
+	tokenPath := tokenCachePath(tenantID + resourceid)
 	saveToken := mkTokenCallback(tokenPath)
 	saveTokenCallback := func(t adal.Token) error {
 		say("Azure token expired. Saving the refreshed token...")

@@ -63,41 +71,18 @@ func Authenticate(env azure.Environment, tenantID string, say func(string)) (*ad
 	}

 	// Lookup the token cache file for an existing token.
-	spt, err := tokenFromFile(say, *oauthCfg, tokenPath, clientID, apiScope, saveTokenCallback)
+	spt, err := tokenFromFile(say, *oauthCfg, tokenPath, clientID, scope, saveTokenCallback)
 	if err != nil {
 		return nil, err
 	}
 	if spt != nil {
 		say(fmt.Sprintf("Auth token found in file: %s", tokenPath))
+		return spt, nil
-
-		// NOTE(ahmetalpbalkan): The token file we found may contain an
-		// expired access_token. In that case, the first call to Azure SDK will
-		// attempt to refresh the token using refresh_token, which might have
-		// expired[1], in that case we will get an error and we shall remove the
-		// token file and initiate token flow again so that the user would not
-		// need removing the token cache file manually.
-		//
-		// [1]: expiration date of refresh_token is not returned in AAD /token
-		//      response, we just know it is 14 days. Therefore users token
-		//      will go stale every 14 days and we will delete the token file,
-		//      re-initiate the device flow.
-		say("Validating the token.")
-		if err = validateToken(env, spt); err != nil {
-			say(fmt.Sprintf("Error: %v", err))
-			say("Stored Azure credentials expired. Please reauthenticate.")
-			say(fmt.Sprintf("Deleting %s", tokenPath))
-			if err := os.RemoveAll(tokenPath); err != nil {
-				return nil, fmt.Errorf("Error deleting stale token file: %v", err)
-			}
-		} else {
-			say("Token works.")
-			return spt, nil
-		}
 	}

 	// Start an OAuth 2.0 device flow
 	say(fmt.Sprintf("Initiating device flow: %s", tokenPath))
-	spt, err = tokenFromDeviceFlow(say, *oauthCfg, clientID, apiScope)
+	spt, err = tokenFromDeviceFlow(say, *oauthCfg, clientID, scope)
 	if err != nil {
 		return nil, err
 	}

@@ -183,20 +168,6 @@ func mkTokenCallback(path string) adal.TokenRefreshCallback {
 	}
 }

-// validateToken makes a call to Azure SDK with given token, essentially making
-// sure if the access_token valid, if not it uses SDKs functionality to
-// automatically refresh the token using refresh_token (which might have
-// expired). This check is essentially to make sure refresh_token is good.
-func validateToken(env azure.Environment, token *adal.ServicePrincipalToken) error {
-	c := subscriptions.NewClientWithBaseURI(env.ResourceManagerEndpoint)
-	c.Authorizer = autorest.NewBearerAuthorizer(token)
-	_, err := c.List(context.TODO())
-	if err != nil {
-		return fmt.Errorf("Token validity check failed: %v", err)
-	}
-	return nil
-}
-
 // FindTenantID figures out the AAD tenant ID of the subscription by making an
 // unauthenticated request to the Get Subscription Details endpoint and parses
 // the value from WWW-Authenticate header.
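Because `Authenticate` now issues device-login tokens for two different resources (service management and Key Vault), the token cache file must be keyed by resource as well as tenant, or the two tokens would overwrite each other. The diff derives the discriminator from the scope URL; a standalone sketch of that logic:

```go
package main

import (
	"fmt"
	"strings"
)

// cacheKey reproduces the diff's discriminator: tokens for the Key Vault
// resource and the management resource must not share one cache file, so the
// tenant ID is suffixed with a per-resource tag before building the path.
func cacheKey(tenantID, scope string) string {
	resourceid := "mgmt"
	if strings.Contains(scope, "vault") {
		resourceid = "vault"
	}
	return tenantID + resourceid
}

func main() {
	fmt.Println(cacheKey("tenant1", "https://vault.azure.net"))              // tenant1vault
	fmt.Println(cacheKey("tenant1", "https://management.core.windows.net/")) // tenant1mgmt
}
```

Substring matching on "vault" is a pragmatic choice here; it works for the known Azure endpoints but would need revisiting if a new resource URL happened to contain that word.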


@@ -111,6 +111,7 @@ func (s *stepSnapshot) Run(_ context.Context, state multistep.StateBag) multiste
 		ui.Error(err.Error())
 		return multistep.ActionHalt
 	}
+	snapshotRegions = append(snapshotRegions, c.Region)

 	log.Printf("Snapshot image ID: %d", imageId)
 	state.Put("snapshot_image_id", imageId)


@@ -168,7 +168,7 @@ func (d *driverGCE) DeleteDisk(zone, name string) (<-chan error, error) {
 }

 func (d *driverGCE) GetImage(name string, fromFamily bool) (*Image, error) {
-	projects := []string{d.projectId, "centos-cloud", "coreos-cloud", "cos-cloud", "debian-cloud", "google-containers", "opensuse-cloud", "rhel-cloud", "suse-cloud", "ubuntu-os-cloud", "windows-cloud", "gce-nvme", "windows-sql-cloud"}
+	projects := []string{d.projectId, "centos-cloud", "coreos-cloud", "cos-cloud", "debian-cloud", "google-containers", "opensuse-cloud", "rhel-cloud", "suse-cloud", "ubuntu-os-cloud", "windows-cloud", "gce-nvme", "windows-sql-cloud", "rhel-sap-cloud"}
 	var errs error
 	for _, project := range projects {
 		image, err := d.GetImageFromProject(project, name, fromFamily)


@@ -2,14 +2,14 @@ package openstack
 import (
 	"crypto/tls"
-	"fmt"
-	"os"
 	"crypto/x509"
+	"fmt"
 	"io/ioutil"
+	"os"

 	"github.com/gophercloud/gophercloud"
 	"github.com/gophercloud/gophercloud/openstack"
+	"github.com/gophercloud/utils/openstack/clientconfig"
 	"github.com/hashicorp/go-cleanhttp"
 	"github.com/hashicorp/packer/template/interpolate"
 )

@@ -30,6 +30,8 @@ type AccessConfig struct {
 	CACertFile     string `mapstructure:"cacert"`
 	ClientCertFile string `mapstructure:"cert"`
 	ClientKeyFile  string `mapstructure:"key"`
+	Token          string `mapstructure:"token"`
+	Cloud          string `mapstructure:"cloud"`

 	osClient *gophercloud.ProviderClient
 }

@@ -42,10 +44,6 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error {
 		return []error{fmt.Errorf("Invalid endpoint type provided")}
 	}

-	if c.Region == "" {
-		c.Region = os.Getenv("OS_REGION_NAME")
-	}
-
 	// Legacy RackSpace stuff. We're keeping this around to keep things BC.
 	if c.Password == "" {
 		c.Password = os.Getenv("SDK_PASSWORD")

@@ -59,6 +57,15 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error {
 	if c.Username == "" {
 		c.Username = os.Getenv("SDK_USERNAME")
 	}
+	// End RackSpace
+
+	if c.Cloud == "" {
+		c.Cloud = os.Getenv("OS_CLOUD")
+	}
+	if c.Region == "" {
+		c.Region = os.Getenv("OS_REGION_NAME")
+	}

 	if c.CACertFile == "" {
 		c.CACertFile = os.Getenv("OS_CACERT")
 	}

@@ -69,8 +76,39 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error {
 		c.ClientKeyFile = os.Getenv("OS_KEY")
 	}

-	// Get as much as possible from the end
-	ao, _ := openstack.AuthOptionsFromEnv()
+	clientOpts := new(clientconfig.ClientOpts)
+
+	// If a cloud entry was given, base AuthOptions on a clouds.yaml file.
+	if c.Cloud != "" {
+		clientOpts.Cloud = c.Cloud
+
+		cloud, err := clientconfig.GetCloudFromYAML(clientOpts)
+		if err != nil {
+			return []error{err}
+		}
+
+		if c.Region == "" && cloud.RegionName != "" {
+			c.Region = cloud.RegionName
+		}
+	} else {
+		authInfo := &clientconfig.AuthInfo{
+			AuthURL:     c.IdentityEndpoint,
+			DomainID:    c.DomainID,
+			DomainName:  c.DomainName,
+			Password:    c.Password,
+			ProjectID:   c.TenantID,
+			ProjectName: c.TenantName,
+			Token:       c.Token,
+			Username:    c.Username,
+			UserID:      c.UserID,
+		}
+		clientOpts.AuthInfo = authInfo
+	}
+
+	ao, err := clientconfig.AuthOptions(clientOpts)
+	if err != nil {
+		return []error{err}
+	}

 	// Make sure we reauth as needed
 	ao.AllowReauth = true

@@ -87,6 +125,7 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error {
 		{&c.TenantName, &ao.TenantName},
 		{&c.DomainID, &ao.DomainID},
 		{&c.DomainName, &ao.DomainName},
+		{&c.Token, &ao.TokenID},
 	}
 	for _, s := range overrides {
 		if *s.From != "" {

@@ -132,7 +171,7 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error {
 	client.HTTPClient.Transport = transport

 	// Auth
-	err = openstack.Authenticate(client, ao)
+	err = openstack.Authenticate(client, *ao)
 	if err != nil {
 		return []error{err}
 	}
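The `overrides` table the diff extends is a compact idiom: a slice of `{from, to}` string-pointer pairs, where any field the user set explicitly in the template wins over whatever `clientconfig` derived from `clouds.yaml` or the environment. A standalone sketch of that pattern:

```go
package main

import "fmt"

// applyOverrides copies each non-empty From value over its To target.
// This mirrors the overrides loop in AccessConfig.Prepare: explicit template
// values take precedence over derived auth options.
func applyOverrides(pairs []struct{ From, To *string }) {
	for _, s := range pairs {
		if *s.From != "" {
			*s.To = *s.From
		}
	}
}

func main() {
	configToken, aoToken := "user-token", "derived-token"
	applyOverrides([]struct{ From, To *string }{{&configToken, &aoToken}})
	fmt.Println(aoToken) // user-token
}
```

Using pointers rather than a map keeps the table type-safe and lets it write directly into both the config struct and the auth-options struct.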


@@ -1,6 +1,7 @@
 package oci

 import (
+	"context"
 	"fmt"

 	"github.com/oracle/oci-go-sdk/core"

@@ -41,11 +42,12 @@ func (a *Artifact) String() string {
 	)
 }

+// State ...
 func (a *Artifact) State(name string) interface{} {
 	return nil
 }

 // Destroy deletes the custom image associated with the artifact.
 func (a *Artifact) Destroy() error {
-	return a.driver.DeleteImage(*a.Image.Id)
+	return a.driver.DeleteImage(context.TODO(), *a.Image.Id)
 }


@@ -58,6 +58,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		},
 		&stepCreateInstance{},
 		&stepInstanceInfo{},
+		&stepGetDefaultCredentials{
+			Debug:     b.config.PackerDebug,
+			Comm:      &b.config.Comm,
+			BuildName: b.config.PackerBuildName,
+		},
 		&communicator.StepConnect{
 			Config: &b.config.Comm,
 			Host:   ocommon.CommHost,


@@ -44,6 +44,10 @@ type Config struct {
 	BaseImageID string `mapstructure:"base_image_ocid"`
 	Shape       string `mapstructure:"shape"`
 	ImageName   string `mapstructure:"image_name"`

+	// Instance
+	InstanceName string `mapstructure:"instance_name"`
+
 	// UserData and UserDataFile file are both optional and mutually exclusive.
 	UserData     string `mapstructure:"user_data"`
 	UserDataFile string `mapstructure:"user_data_file"`


@@ -1,14 +1,18 @@
 package oci

-import "github.com/oracle/oci-go-sdk/core"
+import (
+	"context"
+
+	"github.com/oracle/oci-go-sdk/core"
+)

 // Driver interfaces between the builder steps and the OCI SDK.
 type Driver interface {
-	CreateInstance(publicKey string) (string, error)
-	CreateImage(id string) (core.Image, error)
-	DeleteImage(id string) error
-	GetInstanceIP(id string) (string, error)
-	TerminateInstance(id string) error
-	WaitForImageCreation(id string) error
-	WaitForInstanceState(id string, waitStates []string, terminalState string) error
+	CreateInstance(ctx context.Context, publicKey string) (string, error)
+	CreateImage(ctx context.Context, id string) (core.Image, error)
+	DeleteImage(ctx context.Context, id string) error
+	GetInstanceIP(ctx context.Context, id string) (string, error)
+	TerminateInstance(ctx context.Context, id string) error
+	WaitForImageCreation(ctx context.Context, id string) error
+	WaitForInstanceState(ctx context.Context, id string, waitStates []string, terminalState string) error
 }


@@ -1,6 +1,10 @@
 package oci
 
-import "github.com/oracle/oci-go-sdk/core"
+import (
+	"context"
+
+	"github.com/oracle/oci-go-sdk/core"
+)
 
 // driverMock implements the Driver interface and communicates with Oracle
 // OCI.
@@ -27,7 +31,7 @@ type driverMock struct {
 }
 
 // CreateInstance creates a new compute instance.
-func (d *driverMock) CreateInstance(publicKey string) (string, error) {
+func (d *driverMock) CreateInstance(ctx context.Context, publicKey string) (string, error) {
 	if d.CreateInstanceErr != nil {
 		return "", d.CreateInstanceErr
 	}
@@ -38,7 +42,7 @@ func (d *driverMock) CreateInstance(publicKey string) (string, error) {
 }
 
 // CreateImage creates a new custom image.
-func (d *driverMock) CreateImage(id string) (core.Image, error) {
+func (d *driverMock) CreateImage(ctx context.Context, id string) (core.Image, error) {
 	if d.CreateImageErr != nil {
 		return core.Image{}, d.CreateImageErr
 	}
@@ -47,7 +51,7 @@ func (d *driverMock) CreateImage(id string) (core.Image, error) {
 }
 
 // DeleteImage mocks deleting a custom image.
-func (d *driverMock) DeleteImage(id string) error {
+func (d *driverMock) DeleteImage(ctx context.Context, id string) error {
 	if d.DeleteImageErr != nil {
 		return d.DeleteImageErr
 	}
@@ -58,7 +62,7 @@ func (d *driverMock) DeleteImage(id string) error {
 }
 
 // GetInstanceIP returns the public or private IP corresponding to the given instance id.
-func (d *driverMock) GetInstanceIP(id string) (string, error) {
+func (d *driverMock) GetInstanceIP(ctx context.Context, id string) (string, error) {
 	if d.GetInstanceIPErr != nil {
 		return "", d.GetInstanceIPErr
 	}
@@ -69,7 +73,7 @@ func (d *driverMock) GetInstanceIP(id string) (string, error) {
 }
 
 // TerminateInstance terminates a compute instance.
-func (d *driverMock) TerminateInstance(id string) error {
+func (d *driverMock) TerminateInstance(ctx context.Context, id string) error {
 	if d.TerminateInstanceErr != nil {
 		return d.TerminateInstanceErr
 	}
@@ -81,12 +85,12 @@ func (d *driverMock) TerminateInstance(id string) error {
 // WaitForImageCreation waits for a provisioning custom image to reach the
 // "AVAILABLE" state.
-func (d *driverMock) WaitForImageCreation(id string) error {
+func (d *driverMock) WaitForImageCreation(ctx context.Context, id string) error {
 	return d.WaitForImageCreationErr
 }
 
 // WaitForInstanceState waits for an instance to reach the a given terminal
 // state.
-func (d *driverMock) WaitForInstanceState(id string, waitStates []string, terminalState string) error {
+func (d *driverMock) WaitForInstanceState(ctx context.Context, id string, waitStates []string, terminalState string) error {
 	return d.WaitForInstanceStateErr
 }


@@ -15,6 +15,7 @@ type driverOCI struct {
 	computeClient core.ComputeClient
 	vcnClient     core.VirtualNetworkClient
 	cfg           *Config
+	context       context.Context
 }
 
 // NewDriverOCI Creates a new driverOCI with a connected compute client and a connected vcn client.
@@ -37,7 +38,7 @@ func NewDriverOCI(cfg *Config) (Driver, error) {
 }
 
 // CreateInstance creates a new compute instance.
-func (d *driverOCI) CreateInstance(publicKey string) (string, error) {
+func (d *driverOCI) CreateInstance(ctx context.Context, publicKey string) (string, error) {
 	metadata := map[string]string{
 		"ssh_authorized_keys": publicKey,
 	}
@@ -45,14 +46,21 @@ func (d *driverOCI) CreateInstance(publicKey string) (string, error) {
 		metadata["user_data"] = d.cfg.UserData
 	}
 
-	instance, err := d.computeClient.LaunchInstance(context.TODO(), core.LaunchInstanceRequest{LaunchInstanceDetails: core.LaunchInstanceDetails{
+	instanceDetails := core.LaunchInstanceDetails{
 		AvailabilityDomain: &d.cfg.AvailabilityDomain,
 		CompartmentId:      &d.cfg.CompartmentID,
 		ImageId:            &d.cfg.BaseImageID,
 		Shape:              &d.cfg.Shape,
 		SubnetId:           &d.cfg.SubnetID,
 		Metadata:           metadata,
-	}})
+	}
+
+	// When empty, the default display name is used.
+	if d.cfg.InstanceName != "" {
+		instanceDetails.DisplayName = &d.cfg.InstanceName
+	}
+
+	instance, err := d.computeClient.LaunchInstance(context.TODO(), core.LaunchInstanceRequest{LaunchInstanceDetails: instanceDetails})
 
 	if err != nil {
 		return "", err
@@ -62,8 +70,8 @@ func (d *driverOCI) CreateInstance(publicKey string) (string, error) {
 }
 
 // CreateImage creates a new custom image.
-func (d *driverOCI) CreateImage(id string) (core.Image, error) {
-	res, err := d.computeClient.CreateImage(context.TODO(), core.CreateImageRequest{CreateImageDetails: core.CreateImageDetails{
+func (d *driverOCI) CreateImage(ctx context.Context, id string) (core.Image, error) {
+	res, err := d.computeClient.CreateImage(ctx, core.CreateImageRequest{CreateImageDetails: core.CreateImageDetails{
 		CompartmentId: &d.cfg.CompartmentID,
 		InstanceId:    &id,
 		DisplayName:   &d.cfg.ImageName,
@@ -77,14 +85,14 @@ func (d *driverOCI) CreateImage(id string) (core.Image, error) {
 }
 
 // DeleteImage deletes a custom image.
-func (d *driverOCI) DeleteImage(id string) error {
-	_, err := d.computeClient.DeleteImage(context.TODO(), core.DeleteImageRequest{ImageId: &id})
+func (d *driverOCI) DeleteImage(ctx context.Context, id string) error {
+	_, err := d.computeClient.DeleteImage(ctx, core.DeleteImageRequest{ImageId: &id})
 	return err
 }
 
 // GetInstanceIP returns the public or private IP corresponding to the given instance id.
-func (d *driverOCI) GetInstanceIP(id string) (string, error) {
-	vnics, err := d.computeClient.ListVnicAttachments(context.TODO(), core.ListVnicAttachmentsRequest{
+func (d *driverOCI) GetInstanceIP(ctx context.Context, id string) (string, error) {
+	vnics, err := d.computeClient.ListVnicAttachments(ctx, core.ListVnicAttachmentsRequest{
 		InstanceId:    &id,
 		CompartmentId: &d.cfg.CompartmentID,
 	})
@@ -96,7 +104,7 @@ func (d *driverOCI) GetInstanceIP(id string) (string, error) {
 		return "", errors.New("instance has zero VNICs")
 	}
 
-	vnic, err := d.vcnClient.GetVnic(context.TODO(), core.GetVnicRequest{VnicId: vnics.Items[0].VnicId})
+	vnic, err := d.vcnClient.GetVnic(ctx, core.GetVnicRequest{VnicId: vnics.Items[0].VnicId})
 	if err != nil {
 		return "", fmt.Errorf("Error getting VNIC details: %s", err)
 	}
@@ -112,9 +120,20 @@ func (d *driverOCI) GetInstanceIP(id string) (string, error) {
 	return *vnic.PublicIp, nil
 }
 
+func (d *driverOCI) GetInstanceInitialCredentials(ctx context.Context, id string) (string, string, error) {
+	credentials, err := d.computeClient.GetWindowsInstanceInitialCredentials(ctx, core.GetWindowsInstanceInitialCredentialsRequest{
+		InstanceId: &id,
+	})
+	if err != nil {
+		return "", "", err
+	}
+
+	return *credentials.InstanceCredentials.Username, *credentials.InstanceCredentials.Password, err
+}
+
 // TerminateInstance terminates a compute instance.
-func (d *driverOCI) TerminateInstance(id string) error {
-	_, err := d.computeClient.TerminateInstance(context.TODO(), core.TerminateInstanceRequest{
+func (d *driverOCI) TerminateInstance(ctx context.Context, id string) error {
+	_, err := d.computeClient.TerminateInstance(ctx, core.TerminateInstanceRequest{
 		InstanceId: &id,
 	})
 	return err
@@ -122,10 +141,10 @@ func (d *driverOCI) TerminateInstance(id string) error {
 
 // WaitForImageCreation waits for a provisioning custom image to reach the
 // "AVAILABLE" state.
-func (d *driverOCI) WaitForImageCreation(id string) error {
+func (d *driverOCI) WaitForImageCreation(ctx context.Context, id string) error {
 	return waitForResourceToReachState(
 		func(string) (string, error) {
-			image, err := d.computeClient.GetImage(context.TODO(), core.GetImageRequest{ImageId: &id})
+			image, err := d.computeClient.GetImage(ctx, core.GetImageRequest{ImageId: &id})
 			if err != nil {
 				return "", err
 			}
@@ -141,10 +160,10 @@ func (d *driverOCI) WaitForImageCreation(id string) error {
 
 // WaitForInstanceState waits for an instance to reach the a given terminal
 // state.
-func (d *driverOCI) WaitForInstanceState(id string, waitStates []string, terminalState string) error {
+func (d *driverOCI) WaitForInstanceState(ctx context.Context, id string, waitStates []string, terminalState string) error {
 	return waitForResourceToReachState(
 		func(string) (string, error) {
-			instance, err := d.computeClient.GetInstance(context.TODO(), core.GetInstanceRequest{InstanceId: &id})
+			instance, err := d.computeClient.GetInstance(ctx, core.GetInstanceRequest{InstanceId: &id})
 			if err != nil {
 				return "", err
 			}


@@ -10,7 +10,7 @@ import (
 
 type stepCreateInstance struct{}
 
-func (s *stepCreateInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepCreateInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
 	var (
 		driver = state.Get("driver").(Driver)
 		ui     = state.Get("ui").(packer.Ui)
@@ -19,7 +19,7 @@ func (s *stepCreateInstance) Run(_ context.Context, state multistep.StateBag) mu
 
 	ui.Say("Creating instance...")
 
-	instanceID, err := driver.CreateInstance(publicKey)
+	instanceID, err := driver.CreateInstance(ctx, publicKey)
 	if err != nil {
 		err = fmt.Errorf("Problem creating instance: %s", err)
 		ui.Error(err.Error())
@@ -33,7 +33,7 @@ func (s *stepCreateInstance) Run(_ context.Context, state multistep.StateBag) mu
 
 	ui.Say("Waiting for instance to enter 'RUNNING' state...")
 
-	if err = driver.WaitForInstanceState(instanceID, []string{"STARTING", "PROVISIONING"}, "RUNNING"); err != nil {
+	if err = driver.WaitForInstanceState(ctx, instanceID, []string{"STARTING", "PROVISIONING"}, "RUNNING"); err != nil {
 		err = fmt.Errorf("Error waiting for instance to start: %s", err)
 		ui.Error(err.Error())
 		state.Put("error", err)
@@ -57,14 +57,14 @@ func (s *stepCreateInstance) Cleanup(state multistep.StateBag) {
 
 	ui.Say(fmt.Sprintf("Terminating instance (%s)...", id))
 
-	if err := driver.TerminateInstance(id); err != nil {
+	if err := driver.TerminateInstance(context.TODO(), id); err != nil {
 		err = fmt.Errorf("Error terminating instance. Please terminate manually: %s", err)
 		ui.Error(err.Error())
 		state.Put("error", err)
 		return
 	}
 
-	err := driver.WaitForInstanceState(id, []string{"TERMINATING"}, "TERMINATED")
+	err := driver.WaitForInstanceState(context.TODO(), id, []string{"TERMINATING"}, "TERMINATED")
 	if err != nil {
 		err = fmt.Errorf("Error terminating instance. Please terminate manually: %s", err)
 		ui.Error(err.Error())


@@ -0,0 +1,62 @@
package oci
import (
"context"
"fmt"
"log"
commonhelper "github.com/hashicorp/packer/helper/common"
"github.com/hashicorp/packer/helper/communicator"
"github.com/hashicorp/packer/helper/multistep"
"github.com/hashicorp/packer/packer"
)
type stepGetDefaultCredentials struct {
Debug bool
Comm *communicator.Config
BuildName string
}
func (s *stepGetDefaultCredentials) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
var (
driver = state.Get("driver").(*driverOCI)
ui = state.Get("ui").(packer.Ui)
id = state.Get("instance_id").(string)
)
// Skip if we're not using winrm
if s.Comm.Type != "winrm" {
log.Printf("[INFO] Not using winrm communicator, skipping get password...")
return multistep.ActionContinue
}
// If we already have a password, skip it
if s.Comm.WinRMPassword != "" {
ui.Say("Skipping waiting for password since WinRM password set...")
return multistep.ActionContinue
}
username, password, err := driver.GetInstanceInitialCredentials(ctx, id)
if err != nil {
err = fmt.Errorf("Error getting instance's credentials: %s", err)
ui.Error(err.Error())
state.Put("error", err)
return multistep.ActionHalt
}
s.Comm.WinRMPassword = password
s.Comm.WinRMUser = username
if s.Debug {
ui.Message(fmt.Sprintf(
"[DEBUG] (OCI default credentials): Credentials (since debug is enabled): %s", password))
}
// store so that we can access this later during provisioning
commonhelper.SetSharedState("winrm_password", s.Comm.WinRMPassword, s.BuildName)
return multistep.ActionContinue
}
func (s *stepGetDefaultCredentials) Cleanup(state multistep.StateBag) {
// no cleanup
}


@@ -10,7 +10,7 @@ import (
 
 type stepImage struct{}
 
-func (s *stepImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
 	var (
 		driver = state.Get("driver").(Driver)
 		ui     = state.Get("ui").(packer.Ui)
@@ -19,7 +19,7 @@ func (s *stepImage) Run(_ context.Context, state multistep.StateBag) multistep.S
 
 	ui.Say("Creating image from instance...")
 
-	image, err := driver.CreateImage(instanceID)
+	image, err := driver.CreateImage(ctx, instanceID)
 	if err != nil {
 		err = fmt.Errorf("Error creating image from instance: %s", err)
 		ui.Error(err.Error())
@@ -27,7 +27,7 @@ func (s *stepImage) Run(_ context.Context, state multistep.StateBag) multistep.S
 		return multistep.ActionHalt
 	}
 
-	err = driver.WaitForImageCreation(*image.Id)
+	err = driver.WaitForImageCreation(ctx, *image.Id)
 	if err != nil {
 		err = fmt.Errorf("Error waiting for image creation to finish: %s", err)
 		ui.Error(err.Error())


@@ -10,14 +10,14 @@ import (
 
 type stepInstanceInfo struct{}
 
-func (s *stepInstanceInfo) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
+func (s *stepInstanceInfo) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
 	var (
 		driver = state.Get("driver").(Driver)
 		ui     = state.Get("ui").(packer.Ui)
 		id     = state.Get("instance_id").(string)
 	)
 
-	ip, err := driver.GetInstanceIP(id)
+	ip, err := driver.GetInstanceIP(ctx, id)
 	if err != nil {
 		err = fmt.Errorf("Error getting instance's IP: %s", err)
 		ui.Error(err.Error())


@@ -12,7 +12,7 @@ func testConfig() map[string]interface{} {
 		"api_access_key":  "foo",
 		"api_token":       "bar",
 		"region":          "ams1",
-		"commercial_type": "VC1S",
+		"commercial_type": "START1-S",
 		"ssh_username":    "root",
 		"image":           "image-uuid",
 	}
@@ -98,7 +98,7 @@ func TestBuilderPrepare_CommercialType(t *testing.T) {
 		t.Fatalf("should error")
 	}
 
-	expected := "VC1S"
+	expected := "START1-S"
 	config["commercial_type"] = expected
 	b = Builder{}


@@ -94,7 +94,7 @@ func (s *StepDownloadGuestAdditions) Run(ctx context.Context, state multistep.St
 	} else {
 		ui.Error(err.Error())
 		url = fmt.Sprintf(
-			"http://download.virtualbox.org/virtualbox/%s/%s",
+			"https://download.virtualbox.org/virtualbox/%s/%s",
 			version,
 			additionsName)
 	}
@@ -150,7 +150,7 @@ func (s *StepDownloadGuestAdditions) downloadAdditionsSHA256(ctx context.Context
 	// First things first, we get the list of checksums for the files available
 	// for this version.
 	checksumsUrl := fmt.Sprintf(
-		"http://download.virtualbox.org/virtualbox/%s/SHA256SUMS",
+		"https://download.virtualbox.org/virtualbox/%s/SHA256SUMS",
 		additionsVersion)
 
 	checksumsFile, err := ioutil.TempFile("", "packer")


@@ -3,6 +3,7 @@ package iso
 
 import (
 	"context"
 	"fmt"
+	"path/filepath"
 
 	vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common"
 	"github.com/hashicorp/packer/helper/multistep"
@@ -34,6 +35,16 @@ func (s *stepAttachISO) Run(_ context.Context, state multistep.StateBag) multist
 		device = "0"
 	}
 
+	// If it's a symlink, resolve it to it's target.
+	resolvedIsoPath, err := filepath.EvalSymlinks(isoPath)
+	if err != nil {
+		err := fmt.Errorf("Error resolving symlink for ISO: %s", err)
+		state.Put("error", err)
+		ui.Error(err.Error())
+		return multistep.ActionHalt
+	}
+	isoPath = resolvedIsoPath
+
 	// Attach the disk to the controller
 	command := []string{
 		"storageattach", vmName,


@@ -358,7 +358,8 @@ func (d *VmwareDriver) GuestIP(state multistep.StateBag) (string, error) {
 
 		// open up the lease and read its contents
 		fh, err := os.Open(dhcpLeasesPath)
 		if err != nil {
-			return "", err
+			log.Printf("Error while reading DHCP lease path file %s: %s", dhcpLeasesPath, err.Error())
+			continue
 		}
 		defer fh.Close()


@@ -212,11 +212,13 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
 		}
 	}
 
-	if b.config.Format != "" {
-		if !(b.config.Format == "ova" || b.config.Format == "ovf" || b.config.Format == "vmx") {
-			errs = packer.MultiErrorAppend(errs,
-				fmt.Errorf("format must be one of ova, ovf, or vmx"))
-		}
+	if b.config.Format == "" {
+		b.config.Format = "ovf"
+	}
+
+	if !(b.config.Format == "ova" || b.config.Format == "ovf" || b.config.Format == "vmx") {
+		errs = packer.MultiErrorAppend(errs,
+			fmt.Errorf("format must be one of ova, ovf, or vmx"))
 	}
 
 	// Warnings
@@ -256,7 +258,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 	exportOutputPath := b.config.OutputDir
 
-	if b.config.RemoteType != "" && b.config.Format != "" {
+	if b.config.RemoteType != "" {
 		b.config.OutputDir = b.config.VMName
 	}
 	dir.SetOutputDir(b.config.OutputDir)


@@ -45,8 +45,8 @@ func (s *StepExport) Run(_ context.Context, state multistep.StateBag) multistep.
 		return multistep.ActionContinue
 	}
 
-	if c.RemoteType != "esx5" || s.Format == "" {
-		ui.Say("Skipping export of virtual machine (export is allowed only for ESXi and the format needs to be specified)...")
+	if c.RemoteType != "esx5" {
+		ui.Say("Skipping export of virtual machine (export is allowed only for ESXi)...")
 		return multistep.ActionContinue
 	}


@@ -56,6 +56,7 @@ import (
 	dockersavepostprocessor "github.com/hashicorp/packer/post-processor/docker-save"
 	dockertagpostprocessor "github.com/hashicorp/packer/post-processor/docker-tag"
 	googlecomputeexportpostprocessor "github.com/hashicorp/packer/post-processor/googlecompute-export"
+	googlecomputeimportpostprocessor "github.com/hashicorp/packer/post-processor/googlecompute-import"
 	manifestpostprocessor "github.com/hashicorp/packer/post-processor/manifest"
 	shelllocalpostprocessor "github.com/hashicorp/packer/post-processor/shell-local"
 	vagrantpostprocessor "github.com/hashicorp/packer/post-processor/vagrant"
@@ -146,6 +147,7 @@ var PostProcessors = map[string]packer.PostProcessor{
 	"docker-save":          new(dockersavepostprocessor.PostProcessor),
 	"docker-tag":           new(dockertagpostprocessor.PostProcessor),
 	"googlecompute-export": new(googlecomputeexportpostprocessor.PostProcessor),
+	"googlecompute-import": new(googlecomputeimportpostprocessor.PostProcessor),
 	"manifest":             new(manifestpostprocessor.PostProcessor),
 	"shell-local":          new(shelllocalpostprocessor.PostProcessor),
 	"vagrant":              new(vagrantpostprocessor.PostProcessor),


@@ -1,3 +1,5 @@
+// Code generated by pigeon; DO NOT EDIT.
+
 package bootcommand
 
 import (


@@ -278,20 +278,24 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error {
 	}
 
 	resp, err := httpClient.Do(req)
-	if err == nil && (resp.StatusCode >= 200 && resp.StatusCode < 300) {
-		// If the HEAD request succeeded, then attempt to set the range
-		// query if we can.
-		if resp.Header.Get("Accept-Ranges") == "bytes" {
-			if fi, err := dst.Stat(); err == nil {
-				if _, err = dst.Seek(0, os.SEEK_END); err == nil {
-					req.Header.Set("Range", fmt.Sprintf("bytes=%d-", fi.Size()))
-					d.current = uint64(fi.Size())
+	if err != nil {
+		log.Printf("[DEBUG] (download) Error making HTTP HEAD request: %s", err.Error())
+	} else {
+		if resp.StatusCode >= 200 && resp.StatusCode < 300 {
+			// If the HEAD request succeeded, then attempt to set the range
+			// query if we can.
+			if resp.Header.Get("Accept-Ranges") == "bytes" {
+				if fi, err := dst.Stat(); err == nil {
+					if _, err = dst.Seek(0, os.SEEK_END); err == nil {
+						req.Header.Set("Range", fmt.Sprintf("bytes=%d-", fi.Size()))
+						d.current = uint64(fi.Size())
+					}
 				}
 			}
+		} else {
+			log.Printf("[DEBUG] (download) Unexpected HTTP response during HEAD request: %s", resp.Status)
 		}
-	} else if err != nil || (resp.StatusCode >= 400 && resp.StatusCode < 600) {
-		return fmt.Errorf("%s", resp.Status)
 	}
 
 	// Set the request to GET now, and redo the query to download
@@ -300,8 +304,10 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error {
 	resp, err = httpClient.Do(req)
 	if err != nil {
 		return err
-	} else if err != nil || (resp.StatusCode >= 400 && resp.StatusCode < 600) {
-		return fmt.Errorf("%s", resp.Status)
+	} else {
+		if resp.StatusCode >= 400 && resp.StatusCode < 600 {
+			return fmt.Errorf("Error making HTTP GET request: %s", resp.Status)
+		}
 	}
 
 	d.total = d.current + uint64(resp.ContentLength)


@@ -518,8 +518,14 @@ Hyper-V\Set-VMNetworkAdapter -VMName $vmName -MacAddressSpoofing $enableMacSpoof
 func SetVirtualMachineSecureBoot(vmName string, enableSecureBoot bool, templateName string) error {
 	var script = `
-param([string]$vmName, $enableSecureBoot)
-Hyper-V\Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBoot
+param([string]$vmName, [string]$enableSecureBootString, [string]$templateName)
+$cmdlet = Get-Command Hyper-V\Set-VMFirmware
+# The SecureBootTemplate parameter is only available in later versions
+if ($cmdlet.Parameters.SecureBootTemplate) {
+	Hyper-V\Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBootString -SecureBootTemplate $templateName
+} else {
+	Hyper-V\Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBootString
+}
 `
 
 	var ps powershell.PowerShellCmd
@@ -1009,7 +1015,11 @@ param([string]$mac, [int]$addressIndex)
 try {
 	$vm = Hyper-V\Get-VM | ?{$_.NetworkAdapters.MacAddress -eq $mac}
 	if ($vm.NetworkAdapters.IpAddresses) {
-		$ip = $vm.NetworkAdapters.IpAddresses[$addressIndex]
+		$ipAddresses = $vm.NetworkAdapters.IPAddresses
+		if ($ipAddresses -isnot [array]) {
+			$ipAddresses = @($ipAddresses)
+		}
+		$ip = $ipAddresses[$addressIndex]
 	} else {
 		$vm_info = Get-CimInstance -ClassName Msvm_ComputerSystem -Namespace root\virtualization\v2 -Filter "ElementName='$($vm.Name)'"
 		$ip_details = (Get-CimAssociatedInstance -InputObject $vm_info -ResultClassName Msvm_KvpExchangeComponent).GuestIntrinsicExchangeItems | %{ [xml]$_ } | ?{ $_.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Name']/VALUE[child::text()='NetworkAddressIPv4']") }


@@ -1,36 +1,27 @@
-package shell
+package shell_local
 
 import (
 	"fmt"
 	"io"
+	"log"
 	"os"
 	"os/exec"
 	"syscall"
 
 	"github.com/hashicorp/packer/packer"
-	"github.com/hashicorp/packer/template/interpolate"
 )
 
 type Communicator struct {
 	ExecuteCommand []string
-	Ctx            interpolate.Context
 }
 
 func (c *Communicator) Start(cmd *packer.RemoteCmd) error {
-	// Render the template so that we know how to execute the command
-	c.Ctx.Data = &ExecuteCommandTemplate{
-		Command: cmd.Command,
-	}
-	for i, field := range c.ExecuteCommand {
-		command, err := interpolate.Render(field, &c.Ctx)
-		if err != nil {
-			return fmt.Errorf("Error processing command: %s", err)
-		}
-
-		c.ExecuteCommand[i] = command
-	}
+	if len(c.ExecuteCommand) == 0 {
+		return fmt.Errorf("Error launching command via shell-local communicator: No ExecuteCommand provided")
+	}
 
 	// Build the local command to execute
+	log.Printf("[INFO] (shell-local communicator): Executing local shell command %s", c.ExecuteCommand)
 	localCmd := exec.Command(c.ExecuteCommand[0], c.ExecuteCommand[1:]...)
 	localCmd.Stdin = cmd.Stdin
 	localCmd.Stdout = cmd.Stdout
@@ -79,7 +70,3 @@ func (c *Communicator) Download(string, io.Writer) error {
 
 func (c *Communicator) DownloadDir(string, string, []string) error {
 	return fmt.Errorf("downloadDir not supported")
 }
-
-type ExecuteCommandTemplate struct {
-	Command string
-}


@@ -19,12 +19,13 @@ func TestCommunicator(t *testing.T) {
 		return
 	}
 
-	c := &Communicator{}
+	c := &Communicator{
+		ExecuteCommand: []string{"/bin/sh", "-c", "echo foo"},
+	}
 
 	var buf bytes.Buffer
 	cmd := &packer.RemoteCmd{
-		Command: "/bin/echo foo",
-		Stdout:  &buf,
+		Stdout: &buf,
 	}
 
 	if err := c.Start(cmd); err != nil {


@@ -0,0 +1,227 @@
package shell_local
import (
"errors"
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
"github.com/hashicorp/packer/common"
configHelper "github.com/hashicorp/packer/helper/config"
"github.com/hashicorp/packer/packer"
"github.com/hashicorp/packer/template/interpolate"
)
type Config struct {
common.PackerConfig `mapstructure:",squash"`
// ** DEPRECATED: USE INLINE INSTEAD **
	// ** Only present for backwards compatibility **
// Command is the command to execute
Command string
// An inline script to execute. Multiple strings are all executed
// in the context of a single shell.
Inline []string
// The shebang value used when running inline scripts.
InlineShebang string `mapstructure:"inline_shebang"`
// The file extension to use for the file generated from the inline commands
TempfileExtension string `mapstructure:"tempfile_extension"`
// The local path of the shell script to upload and execute.
Script string
// An array of multiple scripts to run.
Scripts []string
// An array of environment variables that will be injected before
// your command(s) are executed.
Vars []string `mapstructure:"environment_vars"`
EnvVarFormat string `mapstructure:"env_var_format"`
// End dedupe with postprocessor
// The command used to execute the script. The '{{ .Path }}' variable
// should be used to specify where the script goes, {{ .Vars }}
// can be used to inject the environment_vars into the environment.
ExecuteCommand []string `mapstructure:"execute_command"`
UseLinuxPathing bool `mapstructure:"use_linux_pathing"`
Ctx interpolate.Context
}
func Decode(config *Config, raws ...interface{}) error {
//Create passthrough for winrm password so we can fill it in once we know it
config.Ctx.Data = &EnvVarsTemplate{
WinRMPassword: `{{.WinRMPassword}}`,
}
err := configHelper.Decode(&config, &configHelper.DecodeOpts{
Interpolate: true,
InterpolateContext: &config.Ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{
"execute_command",
},
},
}, raws...)
if err != nil {
return fmt.Errorf("Error decoding config: %s, config is %#v, and raws is %#v", err, config, raws)
}
return nil
}
func Validate(config *Config) error {
var errs *packer.MultiError
if runtime.GOOS == "windows" {
if len(config.ExecuteCommand) == 0 {
config.ExecuteCommand = []string{
"cmd",
"/V",
"/C",
"{{.Vars}}",
"call",
"{{.Script}}",
}
}
} else {
if config.InlineShebang == "" {
config.InlineShebang = "/bin/sh -e"
}
if len(config.ExecuteCommand) == 0 {
config.ExecuteCommand = []string{
"/bin/sh",
"-c",
"{{.Vars}} {{.Script}}",
}
}
}
// Clean up input
if config.Inline != nil && len(config.Inline) == 0 {
config.Inline = make([]string, 0)
}
if config.Scripts == nil {
config.Scripts = make([]string, 0)
}
if config.Vars == nil {
config.Vars = make([]string, 0)
}
// Verify that the user has given us a command to run
if config.Command == "" && len(config.Inline) == 0 &&
len(config.Scripts) == 0 && config.Script == "" {
errs = packer.MultiErrorAppend(errs,
errors.New("Command, Inline, Script and Scripts options cannot all be empty."))
}
// Check that user hasn't given us too many commands to run
tooManyOptionsErr := errors.New("You may only specify one of the " +
"following options: Command, Inline, Script or Scripts. Please" +
" consolidate these options in your config.")
if config.Command != "" {
if len(config.Inline) != 0 || len(config.Scripts) != 0 || config.Script != "" {
errs = packer.MultiErrorAppend(errs, tooManyOptionsErr)
} else {
config.Inline = []string{config.Command}
}
}
if config.Script != "" {
if len(config.Scripts) > 0 || len(config.Inline) > 0 {
errs = packer.MultiErrorAppend(errs, tooManyOptionsErr)
} else {
config.Scripts = []string{config.Script}
}
}
if len(config.Scripts) > 0 && config.Inline != nil {
errs = packer.MultiErrorAppend(errs, tooManyOptionsErr)
}
// Check that all scripts we need to run exist locally
for _, path := range config.Scripts {
if _, err := os.Stat(path); err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("Bad script '%s': %s", path, err))
}
}
if config.UseLinuxPathing {
for index, script := range config.Scripts {
scriptAbsPath, err := filepath.Abs(script)
if err != nil {
return fmt.Errorf("Error converting %s to absolute path: %s", script, err.Error())
}
converted, err := ConvertToLinuxPath(scriptAbsPath)
if err != nil {
return err
}
config.Scripts[index] = converted
}
		// Interoperability issues with WSL make creating and running tempfiles
		// via golang's os package basically impossible.
		if len(config.Inline) > 0 {
			errs = packer.MultiErrorAppend(errs,
				fmt.Errorf("Packer is unable to use the Command and Inline "+
					"features with the Windows Subsystem for Linux. Please use "+
					"the Script or Scripts options instead"))
		}
}
// This is currently undocumented and not a feature users are expected to
// interact with.
if config.EnvVarFormat == "" {
if (runtime.GOOS == "windows") && !config.UseLinuxPathing {
config.EnvVarFormat = "set %s=%s && "
} else {
config.EnvVarFormat = "%s='%s' "
}
}
// drop unnecessary "." in extension; we add this later.
if config.TempfileExtension != "" {
if strings.HasPrefix(config.TempfileExtension, ".") {
config.TempfileExtension = config.TempfileExtension[1:]
}
}
// Do a check for bad environment variables, such as '=foo', 'foobar'
for _, kv := range config.Vars {
vs := strings.SplitN(kv, "=", 2)
if len(vs) != 2 || vs[0] == "" {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("Environment variable not in format 'key=value': %s", kv))
}
}
if errs != nil && len(errs.Errors) > 0 {
return errs
}
return nil
}
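The `key=value` check near the end of Validate can be exercised on its own. A minimal standalone sketch (the helper name is illustrative, not part of Packer):

```go
package main

import (
	"fmt"
	"strings"
)

// validEnvVar mirrors the environment-variable check in Validate above:
// the string must split into a non-empty key and a value on the first '='.
func validEnvVar(kv string) bool {
	vs := strings.SplitN(kv, "=", 2)
	return len(vs) == 2 && vs[0] != ""
}

func main() {
	// '=foo' has an empty key and 'foobar' has no '=', so both are rejected.
	fmt.Println(validEnvVar("FOO=bar"), validEnvVar("=bar"), validEnvVar("foobar"))
	// true false false
}
```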
// C:/path/to/your/file becomes /mnt/c/path/to/your/file
func ConvertToLinuxPath(winAbsPath string) (string, error) {
// get absolute path of script, and morph it into the bash path
winAbsPath = strings.Replace(winAbsPath, "\\", "/", -1)
splitPath := strings.SplitN(winAbsPath, ":/", 2)
if len(splitPath) == 2 {
winBashPath := fmt.Sprintf("/mnt/%s/%s", strings.ToLower(splitPath[0]), splitPath[1])
return winBashPath, nil
} else {
err := fmt.Errorf("There was an error splitting your absolute path; expected "+
"to find a drive following the format ':/' but did not: absolute "+
"path: %s", winAbsPath)
return "", err
}
}


@@ -0,0 +1,16 @@
package shell_local
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestConvertToLinuxPath(t *testing.T) {
winPath := "C:/path/to/your/file"
winBashPath := "/mnt/c/path/to/your/file"
converted, _ := ConvertToLinuxPath(winPath)
assert.Equal(t, winBashPath, converted,
"Should have converted %s to %s -- not %s", winPath, winBashPath, converted)
}

common/shell-local/run.go (new file, 201 lines)

@@ -0,0 +1,201 @@
package shell_local
import (
"bufio"
"fmt"
"io/ioutil"
"log"
"os"
"sort"
"strings"
commonhelper "github.com/hashicorp/packer/helper/common"
"github.com/hashicorp/packer/packer"
"github.com/hashicorp/packer/template/interpolate"
)
type ExecuteCommandTemplate struct {
Vars string
Script string
Command string
WinRMPassword string
}
type EnvVarsTemplate struct {
WinRMPassword string
}
func Run(ui packer.Ui, config *Config) (bool, error) {
scripts := make([]string, len(config.Scripts))
if len(config.Scripts) > 0 {
copy(scripts, config.Scripts)
} else if config.Inline != nil {
// If we have an inline script, then turn that into a temporary
// shell script and use that.
tempScriptFileName, err := createInlineScriptFile(config)
if err != nil {
return false, err
}
scripts = append(scripts, tempScriptFileName)
// figure out what extension the file should have, and rename it.
		if config.TempfileExtension != "" {
			renamed := fmt.Sprintf("%s.%s", tempScriptFileName, config.TempfileExtension)
			if err := os.Rename(tempScriptFileName, renamed); err != nil {
				return false, fmt.Errorf("Error renaming temp script file: %s", err)
			}
			tempScriptFileName = renamed
		}
defer os.Remove(tempScriptFileName)
}
// Create environment variables to set before executing the command
flattenedEnvVars, err := createFlattenedEnvVars(config)
if err != nil {
return false, err
}
for _, script := range scripts {
interpolatedCmds, err := createInterpolatedCommands(config, script, flattenedEnvVars)
if err != nil {
return false, err
}
ui.Say(fmt.Sprintf("Running local shell script: %s", script))
comm := &Communicator{
ExecuteCommand: interpolatedCmds,
}
// The remoteCmd generated here isn't actually run, but it allows us to
		// use the same interface for the shell-local communicator as we use for
// the other communicators; ultimately, this command is just used for
// buffers and for reading the final exit status.
flattenedCmd := strings.Join(interpolatedCmds, " ")
cmd := &packer.RemoteCmd{Command: flattenedCmd}
sanitized := flattenedCmd
if len(getWinRMPassword(config.PackerBuildName)) > 0 {
sanitized = strings.Replace(flattenedCmd,
getWinRMPassword(config.PackerBuildName), "*****", -1)
}
log.Printf("[INFO] (shell-local): starting local command: %s", sanitized)
if err := cmd.StartWithUi(comm, ui); err != nil {
return false, fmt.Errorf(
"Error executing script: %s\n\n"+
"Please see output above for more information.",
script)
}
if cmd.ExitStatus != 0 {
return false, fmt.Errorf(
"Erroneous exit code %d while executing script: %s\n\n"+
"Please see output above for more information.",
cmd.ExitStatus,
script)
}
}
return true, nil
}
func createInlineScriptFile(config *Config) (string, error) {
tf, err := ioutil.TempFile("", "packer-shell")
if err != nil {
return "", fmt.Errorf("Error preparing shell script: %s", err)
}
defer tf.Close()
// Write our contents to it
writer := bufio.NewWriter(tf)
if config.InlineShebang != "" {
shebang := fmt.Sprintf("#!%s\n", config.InlineShebang)
log.Printf("[INFO] (shell-local): Prepending inline script with %s", shebang)
writer.WriteString(shebang)
}
// generate context so you can interpolate the command
config.Ctx.Data = &EnvVarsTemplate{
WinRMPassword: getWinRMPassword(config.PackerBuildName),
}
for _, command := range config.Inline {
// interpolate command to check for template variables.
command, err := interpolate.Render(command, &config.Ctx)
if err != nil {
return "", err
}
if _, err := writer.WriteString(command + "\n"); err != nil {
return "", fmt.Errorf("Error preparing shell script: %s", err)
}
}
if err := writer.Flush(); err != nil {
return "", fmt.Errorf("Error preparing shell script: %s", err)
}
err = os.Chmod(tf.Name(), 0700)
if err != nil {
log.Printf("[ERROR] (shell-local): error modifying permissions of temp script file: %s", err.Error())
}
return tf.Name(), nil
}
// Generates the final command to send to the communicator, using either the
// user-provided ExecuteCommand or defaulting to something that makes sense for
// the host OS
func createInterpolatedCommands(config *Config, script string, flattenedEnvVars string) ([]string, error) {
config.Ctx.Data = &ExecuteCommandTemplate{
Vars: flattenedEnvVars,
Script: script,
Command: script,
WinRMPassword: getWinRMPassword(config.PackerBuildName),
}
interpolatedCmds := make([]string, len(config.ExecuteCommand))
for i, cmd := range config.ExecuteCommand {
interpolatedCmd, err := interpolate.Render(cmd, &config.Ctx)
if err != nil {
return nil, fmt.Errorf("Error processing command: %s", err)
}
interpolatedCmds[i] = interpolatedCmd
}
return interpolatedCmds, nil
}
func createFlattenedEnvVars(config *Config) (string, error) {
flattened := ""
envVars := make(map[string]string)
// Always available Packer provided env vars
	envVars["PACKER_BUILD_NAME"] = config.PackerBuildName
	envVars["PACKER_BUILDER_TYPE"] = config.PackerBuilderType
// interpolate environment variables
config.Ctx.Data = &EnvVarsTemplate{
WinRMPassword: getWinRMPassword(config.PackerBuildName),
}
// Split vars into key/value components
for _, envVar := range config.Vars {
envVar, err := interpolate.Render(envVar, &config.Ctx)
if err != nil {
return "", err
}
// Split vars into key/value components
keyValue := strings.SplitN(envVar, "=", 2)
// Store pair, replacing any single quotes in value so they parse
// correctly with required environment variable format
envVars[keyValue[0]] = strings.Replace(keyValue[1], "'", `'"'"'`, -1)
}
// Create a list of env var keys in sorted order
var keys []string
for k := range envVars {
keys = append(keys, k)
}
sort.Strings(keys)
for _, key := range keys {
flattened += fmt.Sprintf(config.EnvVarFormat, key, envVars[key])
}
return flattened, nil
}
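The single-quote replacement applied when storing each value above uses the standard POSIX trick for embedding a quote inside a single-quoted string: close the quote, emit a double-quoted quote, and reopen. A small standalone illustration (the helper name is not part of Packer):

```go
package main

import (
	"fmt"
	"strings"
)

// shellQuote applies the same escaping as createFlattenedEnvVars above:
// each embedded ' becomes '"'"' so the whole value can sit inside '...'.
func shellQuote(value string) string {
	return strings.Replace(value, "'", `'"'"'`, -1)
}

func main() {
	// "%s='%s' " is the non-Windows EnvVarFormat default.
	fmt.Printf("%s='%s' ", "GREETING", shellQuote("it's"))
	// GREETING='it'"'"'s'
}
```

A POSIX shell reading the flattened string parses `'it'"'"'s'` back to the original value `it's`.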
func getWinRMPassword(buildName string) string {
winRMPass, _ := commonhelper.RetrieveSharedState("winrm_password", buildName)
return winRMPass
}

Binary file not shown.


@@ -2,8 +2,7 @@
   "variables": {
     "client_id": "{{env `ARM_CLIENT_ID`}}",
     "client_secret": "{{env `ARM_CLIENT_SECRET`}}",
-    "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
-    "object_id": "{{env `ARM_OBJECT_ID`}}"
+    "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}"
   },
   "builders": [{
     "type": "azure-arm",
@@ -11,7 +10,6 @@
     "client_id": "{{user `client_id`}}",
     "client_secret": "{{user `client_secret`}}",
     "subscription_id": "{{user `subscription_id`}}",
-    "object_id": "{{user `object_id`}}",
     "managed_image_resource_group_name": "packertest",
     "managed_image_name": "MyWindowsOSImage",


@@ -0,0 +1,36 @@
{
"variables": {
"subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}"
},
"builders": [{
"type": "azure-arm",
"subscription_id": "{{user `subscription_id`}}",
"managed_image_resource_group_name": "packertest",
"managed_image_name": "MyWindowsOSImage",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2012-R2-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "3m",
"winrm_username": "packer",
"location": "South Central US",
"vm_size": "Standard_DS2_v2"
}],
"provisioners": [{
"type": "powershell",
"inline": [
"if( Test-Path $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml ){ rm $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml -Force}",
"& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit",
"while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }"
]
}]
}


@@ -200,10 +200,18 @@ func (b *coreBuild) Run(originalUi Ui, cache Cache) ([]Artifact, error) {
 		if len(p.config) > 0 {
 			pConfig = p.config[0]
 		}
-		hookedProvisioners[i] = &HookedProvisioner{
-			p.provisioner,
-			pConfig,
-			p.pType,
+		if b.debug {
+			hookedProvisioners[i] = &HookedProvisioner{
+				&DebuggedProvisioner{Provisioner: p.provisioner},
+				pConfig,
+				p.pType,
+			}
+		} else {
+			hookedProvisioners[i] = &HookedProvisioner{
+				p.provisioner,
+				pConfig,
+				p.pType,
+			}
 		}
 	}


@@ -2,6 +2,7 @@ package packer
 
 import (
 	"fmt"
+	"log"
 	"sync"
 	"time"
 )
@@ -168,3 +169,90 @@ func (p *PausedProvisioner) Cancel() {
 func (p *PausedProvisioner) provision(result chan<- error, ui Ui, comm Communicator) {
 	result <- p.Provisioner.Provision(ui, comm)
 }
// DebuggedProvisioner is a Provisioner implementation that waits until a key
// press before the provisioner is actually run.
type DebuggedProvisioner struct {
Provisioner Provisioner
cancelCh chan struct{}
doneCh chan struct{}
lock sync.Mutex
}
func (p *DebuggedProvisioner) Prepare(raws ...interface{}) error {
return p.Provisioner.Prepare(raws...)
}
func (p *DebuggedProvisioner) Provision(ui Ui, comm Communicator) error {
p.lock.Lock()
cancelCh := make(chan struct{})
p.cancelCh = cancelCh
	// Set up the done channel, which is triggered when we're done
doneCh := make(chan struct{})
defer close(doneCh)
p.doneCh = doneCh
p.lock.Unlock()
defer func() {
p.lock.Lock()
defer p.lock.Unlock()
if p.cancelCh == cancelCh {
p.cancelCh = nil
}
if p.doneCh == doneCh {
p.doneCh = nil
}
}()
// Use a select to determine if we get cancelled during the wait
	message := "Pausing before the next provisioner. Press enter to continue."
result := make(chan string, 1)
go func() {
line, err := ui.Ask(message)
if err != nil {
log.Printf("Error asking for input: %s", err)
}
result <- line
}()
select {
case <-result:
case <-cancelCh:
return nil
}
provDoneCh := make(chan error, 1)
go p.provision(provDoneCh, ui, comm)
select {
case err := <-provDoneCh:
return err
case <-cancelCh:
p.Provisioner.Cancel()
return <-provDoneCh
}
}
func (p *DebuggedProvisioner) Cancel() {
var doneCh chan struct{}
p.lock.Lock()
if p.cancelCh != nil {
close(p.cancelCh)
p.cancelCh = nil
}
if p.doneCh != nil {
doneCh = p.doneCh
}
p.lock.Unlock()
<-doneCh
}
func (p *DebuggedProvisioner) provision(result chan<- error, ui Ui, comm Communicator) {
result <- p.Provisioner.Provision(ui, comm)
}
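The prompt-or-cancel select used in Provision above can be reduced to a standalone sketch (the helper and channel names here are illustrative, not part of Packer):

```go
package main

import (
	"fmt"
	"time"
)

// waitOrCancel blocks until either a result arrives or cancelCh is closed,
// mirroring the select that DebuggedProvisioner.Provision uses to remain
// cancellable while waiting for the user to press enter.
func waitOrCancel(result <-chan string, cancelCh <-chan struct{}) string {
	select {
	case line := <-result:
		return line
	case <-cancelCh:
		return "cancelled"
	}
}

func main() {
	result := make(chan string) // no input ever arrives
	cancelCh := make(chan struct{})
	go func() {
		time.Sleep(10 * time.Millisecond)
		close(cancelCh) // simulate Cancel() being called mid-pause
	}()
	fmt.Println(waitOrCancel(result, cancelCh))
	// cancelled
}
```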


@@ -197,3 +197,67 @@ func TestPausedProvisionerCancel(t *testing.T) {
 		t.Fatal("cancel should be called")
 	}
 }
func TestDebuggedProvisioner_impl(t *testing.T) {
var _ Provisioner = new(DebuggedProvisioner)
}
func TestDebuggedProvisionerPrepare(t *testing.T) {
mock := new(MockProvisioner)
prov := &DebuggedProvisioner{
Provisioner: mock,
}
prov.Prepare(42)
if !mock.PrepCalled {
t.Fatal("prepare should be called")
}
if mock.PrepConfigs[0] != 42 {
t.Fatal("should have proper configs")
}
}
func TestDebuggedProvisionerProvision(t *testing.T) {
mock := new(MockProvisioner)
prov := &DebuggedProvisioner{
Provisioner: mock,
}
ui := testUi()
comm := new(MockCommunicator)
writeReader(ui, "\n")
prov.Provision(ui, comm)
if !mock.ProvCalled {
t.Fatal("prov should be called")
}
if mock.ProvUi != ui {
t.Fatal("should have proper ui")
}
if mock.ProvCommunicator != comm {
t.Fatal("should have proper comm")
}
}
func TestDebuggedProvisionerCancel(t *testing.T) {
mock := new(MockProvisioner)
prov := &DebuggedProvisioner{
Provisioner: mock,
}
provCh := make(chan struct{})
mock.ProvFunc = func() error {
close(provCh)
time.Sleep(10 * time.Millisecond)
return nil
}
// Start provisioning and wait for it to start
go prov.Provision(testUi(), new(MockCommunicator))
<-provCh
// Cancel it
prov.Cancel()
if !mock.CancelCalled {
t.Fatal("cancel should be called")
}
}


@@ -0,0 +1,3 @@
* 1.9.6 => GNU tar format
* 1.10.3 w/ patch => GNU tar format
* 1.10.3 w/o patch => Posix tar format
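These observations are why the compress post-processor pins the header format explicitly on Go >= 1.10. A minimal standalone illustration of forcing GNU format with the standard library (the file name is made up; the helper is a sketch, not the actual `setHeaderFormat`):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// gnuHeader builds a tar header pinned to GNU format. Without the explicit
// Format, Go >= 1.10 may choose the POSIX/PAX encoding, which GCE image
// import rejects.
func gnuHeader(name string, size int64) *tar.Header {
	return &tar.Header{Name: name, Mode: 0644, Size: size, Format: tar.FormatGNU}
}

func main() {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	hdr := gnuHeader("disk.raw", 5)
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write([]byte("hello")); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
	fmt.Println(hdr.Format)
	// GNU
}
```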


@@ -303,6 +303,9 @@ func createTarArchive(files []string, output io.WriteCloser) error {
 			return fmt.Errorf("Failed to create tar header for %s: %s", path, err)
 		}
 
+		// workaround for archive format on go >=1.10
+		setHeaderFormat(header)
+
 		if err := archive.WriteHeader(header); err != nil {
 			return fmt.Errorf("Failed to write tar header for %s: %s", path, err)
 		}


@@ -0,0 +1,9 @@
// +build !go1.10
package compress
import "archive/tar"
func setHeaderFormat(header *tar.Header) {
// no-op
}


@@ -0,0 +1,17 @@
// +build go1.10
package compress
import (
"archive/tar"
"time"
)
func setHeaderFormat(header *tar.Header) {
// We have to set the Format explicitly for the googlecompute-import
// post-processor. Google Cloud only allows importing GNU tar format.
header.Format = tar.FormatGNU
header.AccessTime = time.Time{}
header.ModTime = time.Time{}
header.ChangeTime = time.Time{}
}


@@ -0,0 +1,37 @@
package googlecomputeimport
import (
"fmt"
)
const BuilderId = "packer.post-processor.googlecompute-import"
type Artifact struct {
paths []string
}
func (*Artifact) BuilderId() string {
return BuilderId
}
func (*Artifact) Id() string {
return ""
}
func (a *Artifact) Files() []string {
pathsCopy := make([]string, len(a.paths))
copy(pathsCopy, a.paths)
return pathsCopy
}
func (a *Artifact) String() string {
return fmt.Sprintf("Exported artifacts in: %s", a.paths)
}
func (*Artifact) State(name string) interface{} {
return nil
}
func (a *Artifact) Destroy() error {
return nil
}


@@ -0,0 +1,15 @@
package googlecomputeimport
import (
"testing"
"github.com/hashicorp/packer/packer"
)
func TestArtifact_ImplementsArtifact(t *testing.T) {
var raw interface{}
raw = &Artifact{}
if _, ok := raw.(packer.Artifact); !ok {
		t.Fatalf("Artifact should implement packer.Artifact")
}
}


@@ -0,0 +1,235 @@
package googlecomputeimport
import (
"fmt"
"net/http"
"os"
"strings"
"time"
"google.golang.org/api/compute/v1"
"google.golang.org/api/storage/v1"
"github.com/hashicorp/packer/builder/googlecompute"
"github.com/hashicorp/packer/common"
"github.com/hashicorp/packer/helper/config"
"github.com/hashicorp/packer/helper/multistep"
"github.com/hashicorp/packer/packer"
"github.com/hashicorp/packer/post-processor/compress"
"github.com/hashicorp/packer/template/interpolate"
"golang.org/x/oauth2"
"golang.org/x/oauth2/jwt"
)
type Config struct {
common.PackerConfig `mapstructure:",squash"`
Bucket string `mapstructure:"bucket"`
GCSObjectName string `mapstructure:"gcs_object_name"`
ImageDescription string `mapstructure:"image_description"`
ImageFamily string `mapstructure:"image_family"`
ImageLabels map[string]string `mapstructure:"image_labels"`
ImageName string `mapstructure:"image_name"`
ProjectId string `mapstructure:"project_id"`
AccountFile string `mapstructure:"account_file"`
KeepOriginalImage bool `mapstructure:"keep_input_artifact"`
ctx interpolate.Context
}
type PostProcessor struct {
config Config
runner multistep.Runner
}
func (p *PostProcessor) Configure(raws ...interface{}) error {
err := config.Decode(&p.config, &config.DecodeOpts{
Interpolate: true,
InterpolateContext: &p.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{
"gcs_object_name",
},
},
}, raws...)
if err != nil {
return err
}
// Set defaults
if p.config.GCSObjectName == "" {
p.config.GCSObjectName = "packer-import-{{timestamp}}.tar.gz"
}
errs := new(packer.MultiError)
// Check and render gcs_object_name
if err = interpolate.Validate(p.config.GCSObjectName, &p.config.ctx); err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error parsing gcs_object_name template: %s", err))
}
templates := map[string]*string{
"bucket": &p.config.Bucket,
"image_name": &p.config.ImageName,
"project_id": &p.config.ProjectId,
"account_file": &p.config.AccountFile,
}
for key, ptr := range templates {
if *ptr == "" {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("%s must be set", key))
}
}
if len(errs.Errors) > 0 {
return errs
}
return nil
}
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
var err error
if artifact.BuilderId() != compress.BuilderId {
err = fmt.Errorf(
"incompatible artifact type: %s\nCan only import from Compress post-processor artifacts",
artifact.BuilderId())
return nil, false, err
}
p.config.GCSObjectName, err = interpolate.Render(p.config.GCSObjectName, &p.config.ctx)
if err != nil {
return nil, false, fmt.Errorf("Error rendering gcs_object_name template: %s", err)
}
rawImageGcsPath, err := UploadToBucket(p.config.AccountFile, ui, artifact, p.config.Bucket, p.config.GCSObjectName)
if err != nil {
return nil, p.config.KeepOriginalImage, err
}
gceImageArtifact, err := CreateGceImage(p.config.AccountFile, ui, p.config.ProjectId, rawImageGcsPath, p.config.ImageName, p.config.ImageDescription, p.config.ImageFamily, p.config.ImageLabels)
if err != nil {
return nil, p.config.KeepOriginalImage, err
}
return gceImageArtifact, p.config.KeepOriginalImage, nil
}
func UploadToBucket(accountFile string, ui packer.Ui, artifact packer.Artifact, bucket string, gcsObjectName string) (string, error) {
var client *http.Client
var account googlecompute.AccountFile
err := googlecompute.ProcessAccountFile(&account, accountFile)
if err != nil {
return "", err
}
var DriverScopes = []string{"https://www.googleapis.com/auth/devstorage.full_control"}
conf := jwt.Config{
Email: account.ClientEmail,
PrivateKey: []byte(account.PrivateKey),
Scopes: DriverScopes,
TokenURL: "https://accounts.google.com/o/oauth2/token",
}
client = conf.Client(oauth2.NoContext)
service, err := storage.New(client)
if err != nil {
return "", err
}
ui.Say("Looking for tar.gz file in list of artifacts...")
source := ""
for _, path := range artifact.Files() {
ui.Say(fmt.Sprintf("Found artifact %v...", path))
if strings.HasSuffix(path, ".tar.gz") {
source = path
break
}
}
if source == "" {
		return "", fmt.Errorf("No tar.gz file found in list of artifacts")
}
	artifactFile, err := os.Open(source)
	if err != nil {
		return "", fmt.Errorf("error opening %v: %s", source, err)
	}
ui.Say(fmt.Sprintf("Uploading file %v to GCS bucket %v/%v...", source, bucket, gcsObjectName))
storageObject, err := service.Objects.Insert(bucket, &storage.Object{Name: gcsObjectName}).Media(artifactFile).Do()
if err != nil {
ui.Say(fmt.Sprintf("Failed to upload: %v", storageObject))
return "", err
}
return "https://storage.googleapis.com/" + bucket + "/" + gcsObjectName, nil
}
func CreateGceImage(accountFile string, ui packer.Ui, project string, rawImageURL string, imageName string, imageDescription string, imageFamily string, imageLabels map[string]string) (packer.Artifact, error) {
var client *http.Client
var account googlecompute.AccountFile
err := googlecompute.ProcessAccountFile(&account, accountFile)
if err != nil {
return nil, err
}
var DriverScopes = []string{"https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/devstorage.full_control"}
conf := jwt.Config{
Email: account.ClientEmail,
PrivateKey: []byte(account.PrivateKey),
Scopes: DriverScopes,
TokenURL: "https://accounts.google.com/o/oauth2/token",
}
client = conf.Client(oauth2.NoContext)
service, err := compute.New(client)
if err != nil {
return nil, err
}
gceImage := &compute.Image{
Name: imageName,
Description: imageDescription,
Family: imageFamily,
Labels: imageLabels,
RawDisk: &compute.ImageRawDisk{Source: rawImageURL},
SourceType: "RAW",
}
ui.Say(fmt.Sprintf("Creating GCE image %v...", imageName))
op, err := service.Images.Insert(project, gceImage).Do()
if err != nil {
ui.Say("Error creating GCE image")
return nil, err
}
ui.Say("Waiting for GCE image creation operation to complete...")
for op.Status != "DONE" {
op, err = service.GlobalOperations.Get(project, op.Name).Do()
if err != nil {
return nil, err
}
time.Sleep(5 * time.Second)
}
// fail if image creation operation has an error
if op.Error != nil {
var imageError string
		for _, e := range op.Error.Errors {
			imageError += e.Message
		}
err = fmt.Errorf("failed to create GCE image %s: %s", imageName, imageError)
return nil, err
}
return &Artifact{paths: []string{op.TargetLink}}, nil
}


@@ -1,63 +0,0 @@
package shell_local
import (
"fmt"
"io"
"os"
"os/exec"
"syscall"
"github.com/hashicorp/packer/packer"
)
type Communicator struct{}
func (c *Communicator) Start(cmd *packer.RemoteCmd) error {
localCmd := exec.Command("sh", "-c", cmd.Command)
localCmd.Stdin = cmd.Stdin
localCmd.Stdout = cmd.Stdout
localCmd.Stderr = cmd.Stderr
// Start it. If it doesn't work, then error right away.
if err := localCmd.Start(); err != nil {
return err
}
// We've started successfully. Start a goroutine to wait for
// it to complete and track exit status.
go func() {
var exitStatus int
err := localCmd.Wait()
if err != nil {
if exitErr, ok := err.(*exec.ExitError); ok {
exitStatus = 1
// There is no process-independent way to get the REAL
// exit status so we just try to go deeper.
if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
exitStatus = status.ExitStatus()
}
}
}
cmd.SetExited(exitStatus)
}()
return nil
}
func (c *Communicator) Upload(string, io.Reader, *os.FileInfo) error {
return fmt.Errorf("upload not supported")
}
func (c *Communicator) UploadDir(string, string, []string) error {
return fmt.Errorf("uploadDir not supported")
}
func (c *Communicator) Download(string, io.Writer) error {
return fmt.Errorf("download not supported")
}
func (c *Communicator) DownloadDir(src string, dst string, exclude []string) error {
return fmt.Errorf("downloadDir not supported")
}


@ -1,51 +1,12 @@
package shell_local package shell_local
import ( import (
"bufio" sl "github.com/hashicorp/packer/common/shell-local"
"errors"
"fmt"
"io/ioutil"
"log"
"os"
"sort"
"strings"
"github.com/hashicorp/packer/common"
"github.com/hashicorp/packer/helper/config"
"github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/packer"
"github.com/hashicorp/packer/template/interpolate"
) )
type Config struct {
common.PackerConfig `mapstructure:",squash"`
// An inline script to execute. Multiple strings are all executed
// in the context of a single shell.
Inline []string
// The shebang value used when running inline scripts.
InlineShebang string `mapstructure:"inline_shebang"`
// The local path of the shell script to upload and execute.
Script string
// An array of multiple scripts to run.
Scripts []string
// An array of environment variables that will be injected before
// your command(s) are executed.
Vars []string `mapstructure:"environment_vars"`
// The command used to execute the script. The '{{ .Path }}' variable
// should be used to specify where the script goes, {{ .Vars }}
// can be used to inject the environment_vars into the environment.
ExecuteCommand string `mapstructure:"execute_command"`
ctx interpolate.Context
}
type PostProcessor struct { type PostProcessor struct {
config Config config sl.Config
} }
type ExecuteCommandTemplate struct { type ExecuteCommandTemplate struct {
@ -54,179 +15,34 @@ type ExecuteCommandTemplate struct {
} }
func (p *PostProcessor) Configure(raws ...interface{}) error { func (p *PostProcessor) Configure(raws ...interface{}) error {
err := config.Decode(&p.config, &config.DecodeOpts{ err := sl.Decode(&p.config, raws...)
Interpolate: true,
InterpolateContext: &p.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{
"execute_command",
},
},
}, raws...)
if err != nil { if err != nil {
return err return err
} }
if len(p.config.ExecuteCommand) == 1 {
if p.config.ExecuteCommand == "" { // Backwards compatibility -- before we merged the shell-local
p.config.ExecuteCommand = `chmod +x "{{.Script}}"; {{.Vars}} "{{.Script}}"` // post-processor and provisioners, the post-processor accepted
// execute_command as a string rather than a slice of strings. It didn't
// have a configurable call to shell program, automatically prepending
// the user-supplied execute_command string with "sh -c". If users are
// still using the old way of defining ExecuteCommand (by supplying a
// single string rather than a slice of strings) then we need to
// prepend this command with the call that the post-processor defaulted
// to before.
p.config.ExecuteCommand = append([]string{"sh", "-c"}, p.config.ExecuteCommand...)
} }
if p.config.Inline != nil && len(p.config.Inline) == 0 { return sl.Validate(&p.config)
p.config.Inline = nil
}
if p.config.InlineShebang == "" {
p.config.InlineShebang = "/bin/sh -e"
}
if p.config.Scripts == nil {
p.config.Scripts = make([]string, 0)
}
if p.config.Vars == nil {
p.config.Vars = make([]string, 0)
}
var errs *packer.MultiError
if p.config.Script != "" && len(p.config.Scripts) > 0 {
errs = packer.MultiErrorAppend(errs,
errors.New("Only one of script or scripts can be specified."))
}
if p.config.Script != "" {
p.config.Scripts = []string{p.config.Script}
}
if len(p.config.Scripts) == 0 && p.config.Inline == nil {
errs = packer.MultiErrorAppend(errs,
errors.New("Either a script file or inline script must be specified."))
} else if len(p.config.Scripts) > 0 && p.config.Inline != nil {
errs = packer.MultiErrorAppend(errs,
errors.New("Only a script file or an inline script can be specified, not both."))
}
for _, path := range p.config.Scripts {
if _, err := os.Stat(path); err != nil {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("Bad script '%s': %s", path, err))
}
}
// Do a check for bad environment variables, such as '=foo', 'foobar'
for _, kv := range p.config.Vars {
vs := strings.SplitN(kv, "=", 2)
if len(vs) != 2 || vs[0] == "" {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("Environment variable not in format 'key=value': %s", kv))
}
}
if errs != nil && len(errs.Errors) > 0 {
return errs
}
return nil
} }
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
-	// this particular post-processor doesn't do anything with the artifact
-	// except to return it.
-	scripts := make([]string, len(p.config.Scripts))
-	copy(scripts, p.config.Scripts)
-
-	// If we have an inline script, then turn that into a temporary
-	// shell script and use that.
-	if p.config.Inline != nil {
-		tf, err := ioutil.TempFile("", "packer-shell")
-		if err != nil {
-			return nil, false, fmt.Errorf("Error preparing shell script: %s", err)
-		}
-		defer os.Remove(tf.Name())
-
-		// Set the path to the temporary file
-		scripts = append(scripts, tf.Name())
-
-		// Write our contents to it
-		writer := bufio.NewWriter(tf)
-		writer.WriteString(fmt.Sprintf("#!%s\n", p.config.InlineShebang))
-		for _, command := range p.config.Inline {
-			if _, err := writer.WriteString(command + "\n"); err != nil {
-				return nil, false, fmt.Errorf("Error preparing shell script: %s", err)
-			}
-		}
-
-		if err := writer.Flush(); err != nil {
-			return nil, false, fmt.Errorf("Error preparing shell script: %s", err)
-		}
-
-		tf.Close()
-	}
-
-	// Create environment variables to set before executing the command
-	flattenedEnvVars := p.createFlattenedEnvVars()
-
-	for _, script := range scripts {
-		p.config.ctx.Data = &ExecuteCommandTemplate{
-			Vars:   flattenedEnvVars,
-			Script: script,
-		}
-
-		command, err := interpolate.Render(p.config.ExecuteCommand, &p.config.ctx)
-		if err != nil {
-			return nil, false, fmt.Errorf("Error processing command: %s", err)
-		}
-
-		ui.Say(fmt.Sprintf("Post processing with local shell script: %s", script))
-
-		comm := &Communicator{}
-		cmd := &packer.RemoteCmd{Command: command}
-
-		log.Printf("starting local command: %s", command)
-		if err := cmd.StartWithUi(comm, ui); err != nil {
-			return nil, false, fmt.Errorf(
-				"Error executing script: %s\n\n"+
-					"Please see output above for more information.",
-				script)
-		}
-		if cmd.ExitStatus != 0 {
-			return nil, false, fmt.Errorf(
-				"Erroneous exit code %d while executing script: %s\n\n"+
-					"Please see output above for more information.",
-				cmd.ExitStatus,
-				script)
-		}
-	}
-
-	return artifact, true, nil
-}
-
-func (p *PostProcessor) createFlattenedEnvVars() (flattened string) {
-	flattened = ""
-	envVars := make(map[string]string)
-
-	// Always available Packer provided env vars
-	envVars["PACKER_BUILD_NAME"] = fmt.Sprintf("%s", p.config.PackerBuildName)
-	envVars["PACKER_BUILDER_TYPE"] = fmt.Sprintf("%s", p.config.PackerBuilderType)
-
-	// Split vars into key/value components
-	for _, envVar := range p.config.Vars {
-		keyValue := strings.SplitN(envVar, "=", 2)
-		// Store pair, replacing any single quotes in value so they parse
-		// correctly with required environment variable format
-		envVars[keyValue[0]] = strings.Replace(keyValue[1], "'", `'"'"'`, -1)
-	}
-
-	// Create a list of env var keys in sorted order
-	var keys []string
-	for k := range envVars {
-		keys = append(keys, k)
-	}
-	sort.Strings(keys)
-
-	// Re-assemble vars surrounding value with single quotes and flatten
-	for _, key := range keys {
-		flattened += fmt.Sprintf("%s='%s' ", key, envVars[key])
-	}
-	return
+	retBool, retErr := sl.Run(ui, &p.config)
+	if !retBool {
+		return nil, retBool, retErr
+	}
+
+	return artifact, retBool, retErr
}
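The deleted `createFlattenedEnvVars` helper above (its logic now lives in the shared shell-local module) relied on the classic POSIX trick of escaping an embedded single quote as `'"'"'` so values survive single-quoted shell interpolation. A minimal standalone sketch of that quoting scheme; `flattenEnvVars` is an illustrative name, not the upstream function:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// flattenEnvVars reproduces the quoting scheme of the removed helper:
// keys are emitted in sorted order, each value is wrapped in single
// quotes, and embedded single quotes are escaped with the POSIX
// '"'"' idiom (close quote, double-quoted quote, reopen quote).
func flattenEnvVars(vars map[string]string) string {
	keys := make([]string, 0, len(vars))
	for k := range vars {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var b strings.Builder
	for _, k := range keys {
		v := strings.Replace(vars[k], "'", `'"'"'`, -1)
		fmt.Fprintf(&b, "%s='%s' ", k, v)
	}
	return b.String()
}

func main() {
	// A value containing a single quote gets the escape sequence.
	fmt.Println(flattenEnvVars(map[string]string{
		"FOO": "bar's",
		"BAZ": "qux",
	}))
}
```

This matches the expectations in the `TestPostProcessor_createFlattenedEnvVars` test that this commit deletes, e.g. `FOO='bar'"'"'s'` for the input `FOO=bar's`.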

View File

@ -3,9 +3,11 @@ package shell_local
import (
	"io/ioutil"
	"os"
+	"runtime"
	"testing"

	"github.com/hashicorp/packer/packer"
+	"github.com/stretchr/testify/assert"
)

func TestPostProcessor_ImplementsPostProcessor(t *testing.T) {
@ -28,32 +30,35 @@ func TestPostProcessor_Impl(t *testing.T) {
func TestPostProcessorPrepare_Defaults(t *testing.T) {
	var p PostProcessor
-	config := testConfig()
-	err := p.Configure(config)
+	raws := testConfig()
+	err := p.Configure(raws)
	if err != nil {
		t.Fatalf("err: %s", err)
	}
}

func TestPostProcessorPrepare_InlineShebang(t *testing.T) {
-	config := testConfig()
-	delete(config, "inline_shebang")
+	raws := testConfig()
+	delete(raws, "inline_shebang")
	p := new(PostProcessor)
-	err := p.Configure(config)
+	err := p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}

-	if p.config.InlineShebang != "/bin/sh -e" {
+	expected := ""
+	if runtime.GOOS != "windows" {
+		expected = "/bin/sh -e"
+	}
+	if p.config.InlineShebang != expected {
		t.Fatalf("bad value: %s", p.config.InlineShebang)
	}

	// Test with a good one
-	config["inline_shebang"] = "foo"
+	raws["inline_shebang"] = "foo"
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}
@ -65,23 +70,23 @@ func TestPostProcessorPrepare_InlineShebang(t *testing.T) {
func TestPostProcessorPrepare_InvalidKey(t *testing.T) {
	var p PostProcessor
-	config := testConfig()
+	raws := testConfig()

	// Add a random key
-	config["i_should_not_be_valid"] = true
-	err := p.Configure(config)
+	raws["i_should_not_be_valid"] = true
+	err := p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}
}

func TestPostProcessorPrepare_Script(t *testing.T) {
-	config := testConfig()
-	delete(config, "inline")
+	raws := testConfig()
+	delete(raws, "inline")

-	config["script"] = "/this/should/not/exist"
+	raws["script"] = "/this/should/not/exist"
	p := new(PostProcessor)
-	err := p.Configure(config)
+	err := p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}
@ -93,23 +98,65 @@ func TestPostProcessorPrepare_Script(t *testing.T) {
	}
	defer os.Remove(tf.Name())

-	config["script"] = tf.Name()
+	raws["script"] = tf.Name()
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}
}
func TestPostProcessorPrepare_ExecuteCommand(t *testing.T) {
// Check that passing a string will work (Backwards Compatibility)
p := new(PostProcessor)
raws := testConfig()
raws["execute_command"] = "foo bar"
err := p.Configure(raws)
expected := []string{"sh", "-c", "foo bar"}
if err != nil {
t.Fatalf("should handle backwards compatibility: %s", err)
}
assert.Equal(t, p.config.ExecuteCommand, expected,
"Did not get expected execute_command: expected: %#v; received %#v", expected, p.config.ExecuteCommand)
// Check that passing a list will work
p = new(PostProcessor)
raws = testConfig()
raws["execute_command"] = []string{"foo", "bar"}
err = p.Configure(raws)
if err != nil {
t.Fatalf("should handle backwards compatibility: %s", err)
}
expected = []string{"foo", "bar"}
assert.Equal(t, p.config.ExecuteCommand, expected,
"Did not get expected execute_command: expected: %#v; received %#v", expected, p.config.ExecuteCommand)
// Check that default is as expected
raws = testConfig()
delete(raws, "execute_command")
p = new(PostProcessor)
p.Configure(raws)
if runtime.GOOS != "windows" {
expected = []string{"/bin/sh", "-c", "{{.Vars}} {{.Script}}"}
} else {
expected = []string{"cmd", "/V", "/C", "{{.Vars}}", "call", "{{.Script}}"}
}
assert.Equal(t, p.config.ExecuteCommand, expected,
"Did not get expected default: expected: %#v; received %#v", expected, p.config.ExecuteCommand)
}
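The backwards-compatibility behaviour exercised by the test above — accepting `execute_command` either as a legacy string (wrapped in a shell invocation) or as a list (taken verbatim) — amounts to a small coercion step. A hedged sketch; `normalizeExecuteCommand` is an illustrative name, not the actual `Configure` implementation:

```go
package main

import "fmt"

// normalizeExecuteCommand coerces a raw template value into the []string
// form the post-processor runs: a plain string becomes a "sh -c" wrapper
// (mirroring the non-Windows default seen in the test), while a list is
// used as-is. Anything else is a configuration error.
func normalizeExecuteCommand(raw interface{}) ([]string, error) {
	switch v := raw.(type) {
	case string:
		return []string{"sh", "-c", v}, nil
	case []string:
		return v, nil
	default:
		return nil, fmt.Errorf("execute_command must be a string or a list of strings, got %T", raw)
	}
}

func main() {
	legacy, _ := normalizeExecuteCommand("foo bar")
	modern, _ := normalizeExecuteCommand([]string{"foo", "bar"})
	fmt.Println(legacy, modern)
}
```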
func TestPostProcessorPrepare_ScriptAndInline(t *testing.T) {
	var p PostProcessor
-	config := testConfig()
-	delete(config, "inline")
-	delete(config, "script")
-	err := p.Configure(config)
+	raws := testConfig()
+
+	// Error if no scripts/inline commands provided
+	delete(raws, "inline")
+	delete(raws, "script")
+	delete(raws, "command")
+	delete(raws, "scripts")
+	err := p.Configure(raws)
	if err == nil {
-		t.Fatal("should have error")
+		t.Fatalf("should error when no scripts/inline commands are provided")
	}

	// Test with both
@ -119,9 +166,9 @@ func TestPostProcessorPrepare_ScriptAndInline(t *testing.T) {
	}
	defer os.Remove(tf.Name())

-	config["inline"] = []interface{}{"foo"}
-	config["script"] = tf.Name()
-	err = p.Configure(config)
+	raws["inline"] = []interface{}{"foo"}
+	raws["script"] = tf.Name()
+	err = p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}
@ -129,7 +176,7 @@ func TestPostProcessorPrepare_ScriptAndInline(t *testing.T) {
func TestPostProcessorPrepare_ScriptAndScripts(t *testing.T) {
	var p PostProcessor
-	config := testConfig()
+	raws := testConfig()

	// Test with both
	tf, err := ioutil.TempFile("", "packer")
@ -138,21 +185,21 @@ func TestPostProcessorPrepare_ScriptAndScripts(t *testing.T) {
	}
	defer os.Remove(tf.Name())

-	config["inline"] = []interface{}{"foo"}
-	config["scripts"] = []string{tf.Name()}
-	err = p.Configure(config)
+	raws["inline"] = []interface{}{"foo"}
+	raws["scripts"] = []string{tf.Name()}
+	err = p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}
}

func TestPostProcessorPrepare_Scripts(t *testing.T) {
-	config := testConfig()
-	delete(config, "inline")
+	raws := testConfig()
+	delete(raws, "inline")

-	config["scripts"] = []string{}
+	raws["scripts"] = []string{}
	p := new(PostProcessor)
-	err := p.Configure(config)
+	err := p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}
@ -164,92 +211,55 @@ func TestPostProcessorPrepare_Scripts(t *testing.T) {
	}
	defer os.Remove(tf.Name())

-	config["scripts"] = []string{tf.Name()}
+	raws["scripts"] = []string{tf.Name()}
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}
}

func TestPostProcessorPrepare_EnvironmentVars(t *testing.T) {
-	config := testConfig()
+	raws := testConfig()

	// Test with a bad case
-	config["environment_vars"] = []string{"badvar", "good=var"}
+	raws["environment_vars"] = []string{"badvar", "good=var"}
	p := new(PostProcessor)
-	err := p.Configure(config)
+	err := p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}

	// Test with a trickier case
-	config["environment_vars"] = []string{"=bad"}
+	raws["environment_vars"] = []string{"=bad"}
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err == nil {
		t.Fatal("should have error")
	}

	// Test with a good case
	// Note: baz= is a real env variable, just empty
-	config["environment_vars"] = []string{"FOO=bar", "baz="}
+	raws["environment_vars"] = []string{"FOO=bar", "baz="}
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}

	// Test when the env variable value contains an equals sign
-	config["environment_vars"] = []string{"good=withequals=true"}
+	raws["environment_vars"] = []string{"good=withequals=true"}
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}

	// Test when the env variable value starts with an equals sign
-	config["environment_vars"] = []string{"good==true"}
+	raws["environment_vars"] = []string{"good==true"}
	p = new(PostProcessor)
-	err = p.Configure(config)
+	err = p.Configure(raws)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}
}

-func TestPostProcessor_createFlattenedEnvVars(t *testing.T) {
-	var flattenedEnvVars string
-	config := testConfig()
-
-	userEnvVarTests := [][]string{
-		{},                     // No user env var
-		{"FOO=bar"},            // Single user env var
-		{"FOO=bar's"},          // User env var with single quote in value
-		{"FOO=bar", "BAZ=qux"}, // Multiple user env vars
-		{"FOO=bar=baz"},        // User env var with value containing equals
-		{"FOO==bar"},           // User env var with value starting with equals
-	}
-	expected := []string{
-		`PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `,
-		`FOO='bar' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `,
-		`FOO='bar'"'"'s' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `,
-		`BAZ='qux' FOO='bar' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `,
-		`FOO='bar=baz' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `,
-		`FOO='=bar' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `,
-	}
-
-	p := new(PostProcessor)
-	p.Configure(config)
-
-	// Defaults provided by Packer
-	p.config.PackerBuildName = "vmware"
-	p.config.PackerBuilderType = "iso"
-
-	for i, expectedValue := range expected {
-		p.config.Vars = userEnvVarTests[i]
-		flattenedEnvVars = p.createFlattenedEnvVars()
-		if flattenedEnvVars != expectedValue {
-			t.Fatalf("expected flattened env vars to be: %s, got %s.", expectedValue, flattenedEnvVars)
-		}
-	}
-}

View File

@ -133,7 +133,15 @@ func DecompressOva(dir, src string) error {
		if hdr == nil || err == io.EOF {
			break
		}
+		if err != nil {
+			return err
+		}

+		// We use the fileinfo to get the file name because we are not
+		// expecting path information as from the tar header. It's important
+		// that we not use the path name from the tar header without checking
+		// for the presence of `..`. If we accidentally allow for that, we can
+		// open ourselves up to a path traversal vulnerability.
		info := hdr.FileInfo()

		// Shouldn't be any directories, skip them
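The comment added above describes the path-traversal hazard of trusting tar header names. A minimal sketch of the kind of containment check it alludes to; `safeJoin` is a hypothetical helper for illustration, not part of this commit:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin joins an archive entry name onto an extraction directory and
// rejects the result if it escapes that directory, defeating entries such
// as "../demo.poc" (the payload used by the outside_parent.tar fixture).
func safeJoin(dir, name string) (string, error) {
	p := filepath.Join(dir, name) // Join also cleans the path
	if !strings.HasPrefix(p, filepath.Clean(dir)+string(filepath.Separator)) {
		return "", fmt.Errorf("illegal path %q escapes %q", name, dir)
	}
	return p, nil
}

func main() {
	if _, err := safeJoin("/tmp/out", "../demo.poc"); err != nil {
		fmt.Println("rejected:", err)
	}
	p, _ := safeJoin("/tmp/out", "demo.poc")
	fmt.Println("allowed:", p)
}
```

The commit sidesteps the problem differently, by taking only `hdr.FileInfo().Name()` and discarding any path component, but the containment check above is the general-purpose defense.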

View File

@ -1,9 +1,27 @@
package vagrant

import (
+	"io/ioutil"
+	"os"
+	"path/filepath"
	"testing"
+
+	"github.com/stretchr/testify/assert"
)

func TestVBoxProvider_impl(t *testing.T) {
	var _ Provider = new(VBoxProvider)
}
func TestDecomressOVA(t *testing.T) {
td, err := ioutil.TempDir("", "pp-vagrant-virtualbox")
assert.NoError(t, err)
fixture := "../../common/test-fixtures/decompress-tar/outside_parent.tar"
err = DecompressOva(td, fixture)
assert.NoError(t, err)
_, err = os.Stat(filepath.Join(filepath.Base(td), "demo.poc"))
assert.Error(t, err)
_, err = os.Stat(filepath.Join(td, "demo.poc"))
assert.NoError(t, err)
os.RemoveAll(td)
}

View File

@ -0,0 +1,38 @@
package ansiblelocal
import (
"github.com/hashicorp/packer/packer"
"io"
"os"
)
type communicatorMock struct {
startCommand []string
uploadDestination []string
}
func (c *communicatorMock) Start(cmd *packer.RemoteCmd) error {
c.startCommand = append(c.startCommand, cmd.Command)
cmd.SetExited(0)
return nil
}
func (c *communicatorMock) Upload(dst string, _ io.Reader, _ *os.FileInfo) error {
c.uploadDestination = append(c.uploadDestination, dst)
return nil
}
func (c *communicatorMock) UploadDir(dst, src string, exclude []string) error {
return nil
}
func (c *communicatorMock) Download(src string, dst io.Writer) error {
return nil
}
func (c *communicatorMock) DownloadDir(src, dst string, exclude []string) error {
return nil
}
func (c *communicatorMock) verify() {
}

View File

@ -38,6 +38,9 @@ type Config struct {
	// The main playbook file to execute.
	PlaybookFile string `mapstructure:"playbook_file"`

+	// The playbook files to execute.
+	PlaybookFiles []string `mapstructure:"playbook_files"`
+
	// An array of local paths of playbook files to upload.
	PlaybookPaths []string `mapstructure:"playbook_paths"`
@ -66,6 +69,8 @@ type Config struct {
type Provisioner struct {
	config Config

+	playbookFiles []string
}

func (p *Provisioner) Prepare(raws ...interface{}) error {
@ -80,6 +85,9 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
		return err
	}

+	// Reset the state.
+	p.playbookFiles = make([]string, 0, len(p.config.PlaybookFiles))
+
	// Defaults
	if p.config.Command == "" {
		p.config.Command = "ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook"
@ -94,9 +102,32 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
	// Validation
	var errs *packer.MultiError

-	err = validateFileConfig(p.config.PlaybookFile, "playbook_file", true)
-	if err != nil {
-		errs = packer.MultiErrorAppend(errs, err)
+	// Check that either playbook_file or playbook_files is specified
+	if len(p.config.PlaybookFiles) != 0 && p.config.PlaybookFile != "" {
+		errs = packer.MultiErrorAppend(errs, fmt.Errorf("Either playbook_file or playbook_files can be specified, not both"))
+	}
+	if len(p.config.PlaybookFiles) == 0 && p.config.PlaybookFile == "" {
+		errs = packer.MultiErrorAppend(errs, fmt.Errorf("Either playbook_file or playbook_files must be specified"))
+	}
+	if p.config.PlaybookFile != "" {
+		err = validateFileConfig(p.config.PlaybookFile, "playbook_file", true)
+		if err != nil {
+			errs = packer.MultiErrorAppend(errs, err)
+		}
+	}
+
+	for _, playbookFile := range p.config.PlaybookFiles {
+		if err := validateFileConfig(playbookFile, "playbook_files", true); err != nil {
+			errs = packer.MultiErrorAppend(errs, err)
+		} else {
+			playbookFile, err := filepath.Abs(playbookFile)
+			if err != nil {
+				errs = packer.MultiErrorAppend(errs, err)
+			} else {
+				p.playbookFiles = append(p.playbookFiles, playbookFile)
+			}
+		}
	}

	// Check that the inventory file exists, if configured
@ -169,11 +200,15 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
		}
	}

-	ui.Message("Uploading main Playbook file...")
-	src := p.config.PlaybookFile
-	dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src)))
-	if err := p.uploadFile(ui, comm, dst, src); err != nil {
-		return fmt.Errorf("Error uploading main playbook: %s", err)
+	if p.config.PlaybookFile != "" {
+		ui.Message("Uploading main Playbook file...")
+		src := p.config.PlaybookFile
+		dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src)))
+		if err := p.uploadFile(ui, comm, dst, src); err != nil {
+			return fmt.Errorf("Error uploading main playbook: %s", err)
+		}
+	} else if err := p.provisionPlaybookFiles(ui, comm); err != nil {
+		return err
	}

	if len(p.config.InventoryFile) == 0 {
@ -204,16 +239,16 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
	if len(p.config.GalaxyFile) > 0 {
		ui.Message("Uploading galaxy file...")
-		src = p.config.GalaxyFile
-		dst = filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src)))
+		src := p.config.GalaxyFile
+		dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src)))
		if err := p.uploadFile(ui, comm, dst, src); err != nil {
			return fmt.Errorf("Error uploading galaxy file: %s", err)
		}
	}

	ui.Message("Uploading inventory file...")
-	src = p.config.InventoryFile
-	dst = filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src)))
+	src := p.config.InventoryFile
+	dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src)))
	if err := p.uploadFile(ui, comm, dst, src); err != nil {
		return fmt.Errorf("Error uploading inventory file: %s", err)
	}
@ -279,6 +314,44 @@ func (p *Provisioner) Cancel() {
	os.Exit(0)
}
func (p *Provisioner) provisionPlaybookFiles(ui packer.Ui, comm packer.Communicator) error {
var playbookDir string
if p.config.PlaybookDir != "" {
var err error
playbookDir, err = filepath.Abs(p.config.PlaybookDir)
if err != nil {
return err
}
}
for index, playbookFile := range p.playbookFiles {
if playbookDir != "" && strings.HasPrefix(playbookFile, playbookDir) {
p.playbookFiles[index] = strings.TrimPrefix(playbookFile, playbookDir)
continue
}
if err := p.provisionPlaybookFile(ui, comm, playbookFile); err != nil {
return err
}
}
return nil
}
func (p *Provisioner) provisionPlaybookFile(ui packer.Ui, comm packer.Communicator, playbookFile string) error {
ui.Message(fmt.Sprintf("Uploading playbook file: %s", playbookFile))
remoteDir := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Dir(playbookFile)))
remotePlaybookFile := filepath.ToSlash(filepath.Join(p.config.StagingDir, playbookFile))
if err := p.createDir(ui, comm, remoteDir); err != nil {
return fmt.Errorf("Error uploading playbook file: %s [%s]", playbookFile, err)
}
if err := p.uploadFile(ui, comm, remotePlaybookFile, playbookFile); err != nil {
return fmt.Errorf("Error uploading playbook: %s [%s]", playbookFile, err)
}
return nil
}
func (p *Provisioner) executeGalaxy(ui packer.Ui, comm packer.Communicator) error {
	rolesDir := filepath.ToSlash(filepath.Join(p.config.StagingDir, "roles"))
	galaxyFile := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.GalaxyFile)))
@ -301,7 +374,6 @@ func (p *Provisioner) executeGalaxy(ui packer.Ui, comm packer.Communicator) erro
}

func (p *Provisioner) executeAnsible(ui packer.Ui, comm packer.Communicator) error {
-	playbook := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.PlaybookFile)))
	inventory := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.InventoryFile)))

	extraArgs := fmt.Sprintf(" --extra-vars \"packer_build_name=%s packer_builder_type=%s packer_http_addr=%s\" ",
@ -317,8 +389,28 @@ func (p *Provisioner) executeAnsible(ui packer.Ui, comm packer.Communicator) err
		}
	}
if p.config.PlaybookFile != "" {
playbookFile := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.PlaybookFile)))
if err := p.executeAnsiblePlaybook(ui, comm, playbookFile, extraArgs, inventory); err != nil {
return err
}
}
for _, playbookFile := range p.playbookFiles {
playbookFile = filepath.ToSlash(filepath.Join(p.config.StagingDir, playbookFile))
if err := p.executeAnsiblePlaybook(ui, comm, playbookFile, extraArgs, inventory); err != nil {
return err
}
}
return nil
}
func (p *Provisioner) executeAnsiblePlaybook(
ui packer.Ui, comm packer.Communicator, playbookFile, extraArgs, inventory string,
) error {
	command := fmt.Sprintf("cd %s && %s %s%s -c local -i %s",
-		p.config.StagingDir, p.config.Command, playbook, extraArgs, inventory)
+		p.config.StagingDir, p.config.Command, playbookFile, extraArgs, inventory,
+	)
	ui.Message(fmt.Sprintf("Executing Ansible: %s", command))
	cmd := &packer.RemoteCmd{
		Command: command,

View File

@ -7,14 +7,14 @@ import (
	"strings"
	"testing"

+	"fmt"
+	"github.com/hashicorp/packer/builder/docker"
	"github.com/hashicorp/packer/packer"
+	"github.com/hashicorp/packer/provisioner/file"
+	"github.com/hashicorp/packer/template"
+	"os/exec"
)

-func testConfig() map[string]interface{} {
-	m := make(map[string]interface{})
-	return m
-}

func TestProvisioner_Impl(t *testing.T) {
	var raw interface{}
	raw = &Provisioner{}
@ -73,6 +73,107 @@ func TestProvisionerPrepare_PlaybookFile(t *testing.T) {
	}
}
func TestProvisionerPrepare_PlaybookFiles(t *testing.T) {
var p Provisioner
config := testConfig()
err := p.Prepare(config)
if err == nil {
t.Fatal("should have error")
}
config["playbook_file"] = ""
config["playbook_files"] = []string{}
err = p.Prepare(config)
if err == nil {
t.Fatal("should have error")
}
playbook_file, err := ioutil.TempFile("", "playbook")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(playbook_file.Name())
config["playbook_file"] = playbook_file.Name()
config["playbook_files"] = []string{"some_other_file"}
err = p.Prepare(config)
if err == nil {
t.Fatal("should have error")
}
p = Provisioner{}
config["playbook_file"] = playbook_file.Name()
config["playbook_files"] = []string{}
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
config["playbook_file"] = ""
config["playbook_files"] = []string{playbook_file.Name()}
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
}
func TestProvisionerProvision_PlaybookFiles(t *testing.T) {
var p Provisioner
config := testConfig()
playbooks := createTempFiles("", 3)
defer removeFiles(playbooks...)
config["playbook_files"] = playbooks
err := p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
comm := &communicatorMock{}
if err := p.Provision(&uiStub{}, comm); err != nil {
t.Fatalf("err: %s", err)
}
assertPlaybooksUploaded(comm, playbooks)
assertPlaybooksExecuted(comm, playbooks)
}
func TestProvisionerProvision_PlaybookFilesWithPlaybookDir(t *testing.T) {
var p Provisioner
config := testConfig()
playbook_dir, err := ioutil.TempDir("", "")
if err != nil {
t.Fatalf("Failed to create playbook_dir: %s", err)
}
defer os.RemoveAll(playbook_dir)
playbooks := createTempFiles(playbook_dir, 3)
playbookNames := make([]string, 0, len(playbooks))
playbooksInPlaybookDir := make([]string, 0, len(playbooks))
for _, playbook := range playbooks {
playbooksInPlaybookDir = append(playbooksInPlaybookDir, strings.TrimPrefix(playbook, playbook_dir))
playbookNames = append(playbookNames, filepath.Base(playbook))
}
config["playbook_files"] = playbooks
config["playbook_dir"] = playbook_dir
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
comm := &communicatorMock{}
if err := p.Provision(&uiStub{}, comm); err != nil {
t.Fatalf("err: %s", err)
}
assertPlaybooksNotUploaded(comm, playbookNames)
assertPlaybooksExecuted(comm, playbooksInPlaybookDir)
}
func TestProvisionerPrepare_InventoryFile(t *testing.T) {
	var p Provisioner
	config := testConfig()
@ -211,3 +312,216 @@ func TestProvisionerPrepare_CleanStagingDir(t *testing.T) {
		t.Fatalf("expected clean_staging_directory to be set")
	}
}
func TestProvisionerProvisionDocker_PlaybookFiles(t *testing.T) {
testProvisionerProvisionDockerWithPlaybookFiles(t, playbookFilesDockerTemplate)
}
func TestProvisionerProvisionDocker_PlaybookFilesWithPlaybookDir(t *testing.T) {
testProvisionerProvisionDockerWithPlaybookFiles(t, playbookFilesWithPlaybookDirDockerTemplate)
}
func testProvisionerProvisionDockerWithPlaybookFiles(t *testing.T, templateString string) {
if os.Getenv("PACKER_ACC") == "" {
t.Skip("This test is only run with PACKER_ACC=1")
}
ui := packer.TestUi(t)
cache := &packer.FileCache{CacheDir: os.TempDir()}
tpl, err := template.Parse(strings.NewReader(templateString))
if err != nil {
t.Fatalf("Unable to parse config: %s", err)
}
// Check if docker executable can be found.
_, err = exec.LookPath("docker")
if err != nil {
t.Error("docker command not found; please make sure docker is installed")
}
// Setup the builder
builder := &docker.Builder{}
warnings, err := builder.Prepare(tpl.Builders["docker"].Config)
if err != nil {
t.Fatalf("Error preparing configuration %s", err)
}
if len(warnings) > 0 {
t.Fatal("Encountered configuration warnings; aborting")
}
ansible := &Provisioner{}
err = ansible.Prepare(tpl.Provisioners[0].Config)
if err != nil {
t.Fatalf("Error preparing ansible-local provisioner: %s", err)
}
download := &file.Provisioner{}
err = download.Prepare(tpl.Provisioners[1].Config)
if err != nil {
t.Fatalf("Error preparing download: %s", err)
}
// Add hooks so the provisioners run during the build
hooks := map[string][]packer.Hook{}
hooks[packer.HookProvision] = []packer.Hook{
&packer.ProvisionHook{
Provisioners: []*packer.HookedProvisioner{
{ansible, nil, ""},
{download, nil, ""},
},
},
}
hook := &packer.DispatchHook{Mapping: hooks}
artifact, err := builder.Run(ui, hook, cache)
if err != nil {
t.Fatalf("Error running build %s", err)
}
defer os.Remove("hello_world")
defer artifact.Destroy()
actualContent, err := ioutil.ReadFile("hello_world")
if err != nil {
t.Fatalf("Expected file not found: %s", err)
}
expectedContent := "Hello world!"
if string(actualContent) != expectedContent {
t.Fatalf(`Unexpected file content: expected="%s", actual="%s"`, expectedContent, actualContent)
}
}
func assertPlaybooksExecuted(comm *communicatorMock, playbooks []string) {
cmdIndex := 0
for _, playbook := range playbooks {
playbook = filepath.ToSlash(playbook)
for ; cmdIndex < len(comm.startCommand); cmdIndex++ {
cmd := comm.startCommand[cmdIndex]
if strings.Contains(cmd, "ansible-playbook") && strings.Contains(cmd, playbook) {
break
}
}
if cmdIndex == len(comm.startCommand) {
panic(fmt.Sprintf("Playbook %s was not executed", playbook))
}
}
}
func assertPlaybooksUploaded(comm *communicatorMock, playbooks []string) {
uploadIndex := 0
for _, playbook := range playbooks {
playbook = filepath.ToSlash(playbook)
for ; uploadIndex < len(comm.uploadDestination); uploadIndex++ {
dest := comm.uploadDestination[uploadIndex]
if strings.HasSuffix(dest, playbook) {
break
}
}
if uploadIndex == len(comm.uploadDestination) {
panic(fmt.Sprintf("Playbook %s was not uploaded", playbook))
}
}
}
func assertPlaybooksNotUploaded(comm *communicatorMock, playbooks []string) {
for _, playbook := range playbooks {
playbook = filepath.ToSlash(playbook)
for _, destination := range comm.uploadDestination {
if strings.HasSuffix(destination, playbook) {
panic(fmt.Sprintf("Playbook %s was uploaded", playbook))
}
}
}
}
func testConfig() map[string]interface{} {
m := make(map[string]interface{})
return m
}
func createTempFile(dir string) string {
file, err := ioutil.TempFile(dir, "")
if err != nil {
panic(fmt.Sprintf("err: %s", err))
}
return file.Name()
}
func createTempFiles(dir string, numFiles int) []string {
files := make([]string, 0, numFiles)
defer func() {
// Cleanup the files if not all were created.
if len(files) < numFiles {
for _, file := range files {
os.Remove(file)
}
}
}()
for i := 0; i < numFiles; i++ {
files = append(files, createTempFile(dir))
}
return files
}
func removeFiles(files ...string) {
for _, file := range files {
os.Remove(file)
}
}
const playbookFilesDockerTemplate = `
{
"builders": [
{
"type": "docker",
"image": "williamyeh/ansible:centos7",
"discard": true
}
],
"provisioners": [
{
"type": "ansible-local",
"playbook_files": [
"test-fixtures/hello.yml",
"test-fixtures/world.yml"
]
},
{
"type": "file",
"source": "/tmp/hello_world",
"destination": "hello_world",
"direction": "download"
}
]
}
`
const playbookFilesWithPlaybookDirDockerTemplate = `
{
"builders": [
{
"type": "docker",
"image": "williamyeh/ansible:centos7",
"discard": true
}
],
"provisioners": [
{
"type": "ansible-local",
"playbook_files": [
"test-fixtures/hello.yml",
"test-fixtures/world.yml"
],
"playbook_dir": "test-fixtures"
},
{
"type": "file",
"source": "/tmp/hello_world",
"destination": "hello_world",
"direction": "download"
}
]
}
`

View File

@ -0,0 +1,5 @@
---
- hosts: all
tasks:
- name: write Hello
shell: echo -n "Hello" >> /tmp/hello_world

View File

@ -0,0 +1,5 @@
---
- hosts: all
tasks:
- name: write world!
shell: echo -n " world!" >> /tmp/hello_world

View File

@ -0,0 +1,15 @@
package ansiblelocal
type uiStub struct{}
func (su *uiStub) Ask(string) (string, error) {
return "", nil
}
func (su *uiStub) Error(string) {}
func (su *uiStub) Machine(string, ...string) {}
func (su *uiStub) Message(string) {}
func (su *uiStub) Say(msg string) {}

View File

@ -1,5 +1,5 @@
// This package implements a provisioner for Packer that executes powershell
// scripts within the remote machine.
package powershell

import (
@@ -39,8 +39,8 @@ type Config struct {
	// converted from Windows to Unix-style.
	Binary bool

	// An inline script to execute. Multiple strings are all executed in the
	// context of a single shell.
	Inline []string

	// The local path of the powershell script to upload and execute.
@@ -49,32 +49,33 @@ type Config struct {
	// An array of multiple scripts to run.
	Scripts []string

	// An array of environment variables that will be injected before your
	// command(s) are executed.
	Vars []string `mapstructure:"environment_vars"`

	// The remote path where the local powershell script will be uploaded to.
	// This should be set to a writable file that is in a pre-existing
	// directory.
	RemotePath string `mapstructure:"remote_path"`

	// The remote path where the file containing the environment variables
	// will be uploaded to. This should be set to a writable file that is in a
	// pre-existing directory.
	RemoteEnvVarPath string `mapstructure:"remote_env_var_path"`

	// The command used to execute the script. The '{{ .Path }}' variable
	// should be used to specify where the script goes, {{ .Vars }} can be
	// used to inject the environment_vars into the environment.
	ExecuteCommand string `mapstructure:"execute_command"`

	// The command used to execute the elevated script. The '{{ .Path }}'
	// variable should be used to specify where the script goes, {{ .Vars }}
	// can be used to inject the environment_vars into the environment.
	ElevatedExecuteCommand string `mapstructure:"elevated_execute_command"`

	// The timeout for retrying to start the process. Until this timeout is
	// reached, if the provisioner can't start a process, it retries. This
	// can be set high to allow for reboots.
	StartRetryTimeout time.Duration `mapstructure:"start_retry_timeout"`

	// This is used in the template generation to format environment variables
@@ -85,15 +86,16 @@ type Config struct {
	// inside the `ElevatedExecuteCommand` template.
	ElevatedEnvVarFormat string `mapstructure:"elevated_env_var_format"`

	// Instructs the communicator to run the remote script as a Windows
	// scheduled task, effectively elevating the remote user by impersonating
	// a logged-in user
	ElevatedUser     string `mapstructure:"elevated_user"`
	ElevatedPassword string `mapstructure:"elevated_password"`

	// Valid Exit Codes - 0 is not always the only valid error code! See
	// http://www.symantec.com/connect/articles/windows-system-error-codes-exit-codes-description
	// for examples such as 3010 - "The requested operation is successful.
	// Changes will not be effective until the system is rebooted."
	ValidExitCodes []int `mapstructure:"valid_exit_codes"`

	ctx interpolate.Context
@@ -115,7 +117,8 @@ type EnvVarsTemplate struct {
}

func (p *Provisioner) Prepare(raws ...interface{}) error {
	// Create passthrough for winrm password so we can fill it in once we know
	// it
	p.config.ctx.Data = &EnvVarsTemplate{
		WinRMPassword: `{{.WinRMPassword}}`,
	}
@@ -232,9 +235,8 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
	return nil
}

// Takes the inline scripts, concatenates them into a temporary file and
// returns a string containing the location of said file.
func extractScript(p *Provisioner) (string, error) {
	temp, err := ioutil.TempFile(os.TempDir(), "packer-powershell-provisioner")
	if err != nil {
@@ -288,11 +290,10 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
		return fmt.Errorf("Error processing command: %s", err)
	}

	// Upload the file and run the command. Do this in the context of a
	// single retryable function so that we don't end up with the case
	// that the upload succeeded, a restart is initiated, and then the
	// command is executed but the file doesn't exist any longer.
	var cmd *packer.RemoteCmd
	err = p.retryable(func() error {
		if _, err := f.Seek(0, 0); err != nil {
@@ -330,13 +331,13 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
}

func (p *Provisioner) Cancel() {
	// Just hard quit. It isn't a big deal if what we're doing keeps running
	// on the other side.
	os.Exit(0)
}

// retryable will retry the given function over and over until a non-error is
// returned.
func (p *Provisioner) retryable(f func() error) error {
	startTimeout := time.After(p.config.StartRetryTimeout)
	for {
@@ -349,9 +350,8 @@ func (p *Provisioner) retryable(f func() error) error {
		err = fmt.Errorf("Retryable error: %s", err)
		log.Print(err.Error())

		// Check if we timed out, otherwise we retry. It is safe to retry
		// since the only error case above is if the command failed to START.
		select {
		case <-startTimeout:
			return err
@@ -361,12 +361,15 @@ func (p *Provisioner) retryable(f func() error) error {
	}
}

// Environment variables required within the remote environment are uploaded
// within a PS script and then enabled by 'dot sourcing' the script
// immediately prior to execution of the main command
func (p *Provisioner) prepareEnvVars(elevated bool) (err error) {
	// Collate all required env vars into a plain string with required
	// formatting applied
	flattenedEnvVars := p.createFlattenedEnvVars(elevated)

	// Create a powershell script on the target build fs containing the
	// flattened env vars
	err = p.uploadEnvVars(flattenedEnvVars)
	if err != nil {
		return err
@@ -426,12 +429,20 @@ func (p *Provisioner) createFlattenedEnvVars(elevated bool) (flattened string) {
}

func (p *Provisioner) uploadEnvVars(flattenedEnvVars string) (err error) {
	// Upload all env vars to a powershell script on the target build file
	// system. Do this in the context of a single retryable function so that
	// we gracefully handle any errors created by transient conditions such as
	// a system restart
	envVarReader := strings.NewReader(flattenedEnvVars)
	log.Printf("Uploading env vars to %s", p.config.RemoteEnvVarPath)
	err = p.retryable(func() error {
		if err := p.communicator.Upload(p.config.RemoteEnvVarPath, envVarReader, nil); err != nil {
			return fmt.Errorf("Error uploading ps script containing env vars: %s", err)
		}
		return err
	})
	if err != nil {
		return err
	}
	return
}
@@ -446,7 +457,8 @@ func (p *Provisioner) createCommandText() (command string, err error) {
}

func (p *Provisioner) createCommandTextNonPrivileged() (command string, err error) {
	// Prepare everything needed to enable the required env vars within the
	// remote environment
	err = p.prepareEnvVars(false)
	if err != nil {
		return "", err
@@ -473,7 +485,8 @@ func getWinRMPassword(buildName string) string {
}

func (p *Provisioner) createCommandTextPrivileged() (command string, err error) {
	// Prepare everything needed to enable the required env vars within the
	// remote environment
	err = p.prepareEnvVars(true)
	if err != nil {
		return "", err
@@ -489,8 +502,9 @@ func (p *Provisioner) createCommandTextPrivileged() (command string, err error)
		return "", fmt.Errorf("Error processing command: %s", err)
	}

	// OK so we need an elevated shell runner to wrap our command, this is
	// going to have its own path generate the script and update the command
	// runner in the process
	path, err := p.generateElevatedRunner(command)
	if err != nil {
		return "", fmt.Errorf("Error generating elevated runner: %s", err)
@@ -507,23 +521,23 @@ func (p *Provisioner) generateElevatedRunner(command string) (uploadedPath strin
	var buffer bytes.Buffer

	// Output from the elevated command cannot be returned directly to the
	// Packer console. In order to be able to view output from elevated
	// commands and scripts an indirect approach is used by which the commands
	// output is first redirected to file. The output file is then 'watched'
	// by Packer while the elevated command is running and any content
	// appearing in the file is written out to the console. Below the portion
	// of command required to redirect output from the command to file is
	// built and appended to the existing command string
	taskName := fmt.Sprintf("packer-%s", uuid.TimeOrderedUUID())

	// Only use %ENVVAR% format for environment variables when setting the log
	// file path; Do NOT use $env:ENVVAR format as it won't be expanded
	// correctly in the elevatedTemplate
	logFile := `%SYSTEMROOT%/Temp/` + taskName + ".out"
	command += fmt.Sprintf(" > %s 2>&1", logFile)

	// elevatedTemplate wraps the command in a single quoted XML text string
	// so we need to escape characters considered 'special' in XML.
	err = xml.EscapeText(&buffer, []byte(command))
	if err != nil {
		return "", fmt.Errorf("Error escaping characters special to XML in command %s: %s", command, err)


@@ -1,45 +0,0 @@
package shell

import (
	"bytes"
	"runtime"
	"strings"
	"testing"

	"github.com/hashicorp/packer/packer"
)

func TestCommunicator_impl(t *testing.T) {
	var _ packer.Communicator = new(Communicator)
}

func TestCommunicator(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("windows not supported for this test")
		return
	}

	c := &Communicator{
		ExecuteCommand: []string{"/bin/sh", "-c", "{{.Command}}"},
	}

	var buf bytes.Buffer
	cmd := &packer.RemoteCmd{
		Command: "echo foo",
		Stdout:  &buf,
	}
	if err := c.Start(cmd); err != nil {
		t.Fatalf("err: %s", err)
	}
	cmd.Wait()

	if cmd.ExitStatus != 0 {
		t.Fatalf("err bad exit status: %d", cmd.ExitStatus)
	}
	if strings.TrimSpace(buf.String()) != "foo" {
		t.Fatalf("bad: %s", buf.String())
	}
}


@@ -1,105 +1,32 @@
package shell

import (
	sl "github.com/hashicorp/packer/common/shell-local"
	"github.com/hashicorp/packer/packer"
)

type Provisioner struct {
	config sl.Config
}

func (p *Provisioner) Prepare(raws ...interface{}) error {
	err := sl.Decode(&p.config, raws...)
	if err != nil {
		return err
	}

	err = sl.Validate(&p.config)
	if err != nil {
		return err
	}

	return nil
}

func (p *Provisioner) Provision(ui packer.Ui, _ packer.Communicator) error {
	_, retErr := sl.Run(ui, &p.config)
	if retErr != nil {
		return retErr
	}

	return nil


@@ -1,14 +0,0 @@
#!/usr/bin/env bash

# Check gofmt
echo "==> Checking that code complies with gofmt requirements..."
gofmt_files=$(gofmt -s -l ${@})
if [[ -n ${gofmt_files} ]]; then
    echo 'gofmt needs running on the following files:'
    echo "${gofmt_files}"
    echo "You can use the command: \`make fmt\` to reformat code."
    exit 1
fi

echo "Check passed."
exit 0


@@ -56,8 +56,9 @@ This simple parsing example:

is directly mapped to:

```go
	if token, err := request.ParseFromRequest(req, request.OAuth2Extractor, keyLookupFunc); err == nil {
		claims := token.Claims.(jwt.MapClaims)
		fmt.Printf("Token for user %v expires %v", claims["user"], claims["exp"])
	}
```


@@ -1,11 +1,15 @@
# jwt-go

[![Build Status](https://travis-ci.org/dgrijalva/jwt-go.svg?branch=master)](https://travis-ci.org/dgrijalva/jwt-go)
[![GoDoc](https://godoc.org/github.com/dgrijalva/jwt-go?status.svg)](https://godoc.org/github.com/dgrijalva/jwt-go)

A [go](http://www.golang.org) (or 'golang' for search engine friendliness) implementation of [JSON Web Tokens](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html)

**NEW VERSION COMING:** There have been a lot of improvements suggested since the version 3.0.0 released in 2016. I'm working now on cutting two different releases: 3.2.0 will contain any non-breaking changes or enhancements. 4.0.0 will follow shortly which will include breaking changes. See the 4.0.0 milestone to get an idea of what's coming. If you have other ideas, or would like to participate in 4.0.0, now's the time. If you depend on this library and don't want to be interrupted, I recommend you use your dependency management tool to pin to version 3.

**SECURITY NOTICE:** Some older versions of Go have a security issue in crypto/elliptic. The recommendation is to upgrade to at least 1.8.3. See issue #216 for more detail.

**SECURITY NOTICE:** It's important that you [validate the `alg` presented is what you expect](https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/). This library attempts to make it easy to do the right thing by requiring key types match the expected alg, but you should take the extra step to verify it in your usage. See the examples provided.

## What the heck is a JWT?

@@ -25,8 +29,8 @@ This library supports the parsing and verification as well as the generation and

See [the project documentation](https://godoc.org/github.com/dgrijalva/jwt-go) for examples of usage:

* [Simple example of parsing and validating a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-Parse--Hmac)
* [Simple example of building and signing a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-New--Hmac)
* [Directory of Examples](https://godoc.org/github.com/dgrijalva/jwt-go#pkg-examples)

## Extensions

@@ -47,7 +51,10 @@ This library is considered production ready. Feedback and feature requests are

This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull requests will land on `master`. Periodically, versions will be tagged from `master`. You can find all the releases on [the project releases page](https://github.com/dgrijalva/jwt-go/releases).

While we try to make it obvious when we make breaking changes, there isn't a great mechanism for pushing announcements out to users. You may want to use this alternative package include: `gopkg.in/dgrijalva/jwt-go.v3`. It will do the right thing WRT semantic versioning.

**BREAKING CHANGES:**

* Version 3.0.0 includes _a lot_ of changes from the 2.x line, including a few that break the API. We've tried to break as few things as possible, so there should just be a few type signature changes. A full list of breaking changes is available in `VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating your code.

## Usage Tips

@@ -68,13 +75,21 @@ Symmetric signing methods, such as HSA, use only a single secret. This is probab

Asymmetric signing methods, such as RSA, use different keys for signing and verifying tokens. This makes it possible to produce tokens with a private key, and allow any consumer to access the public key for verification.
### Signing Methods and Key Types
Each signing method expects a different object type for its signing keys. See the package documentation for details. Here are the most common ones:
* The [HMAC signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodHMAC) (`HS256`,`HS384`,`HS512`) expect `[]byte` values for signing and validation
* The [RSA signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodRSA) (`RS256`,`RS384`,`RS512`) expect `*rsa.PrivateKey` for signing and `*rsa.PublicKey` for validation
* The [ECDSA signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodECDSA) (`ES256`,`ES384`,`ES512`) expect `*ecdsa.PrivateKey` for signing and `*ecdsa.PublicKey` for validation
### JWT and OAuth

It's worth mentioning that OAuth and JWT are not the same thing. A JWT token is simply a signed JSON object. It can be used anywhere such a thing is useful. There is some confusion, though, as JWT is the most common type of bearer token used in OAuth2 authentication.

Without going too far down the rabbit hole, here's a description of the interaction of these technologies:

* OAuth is a protocol for allowing an identity provider to be separate from the service a user is logging in to. For example, whenever you use Facebook to log into a different service (Yelp, Spotify, etc), you are using OAuth.
* OAuth defines several options for passing around authentication data. One popular method is called a "bearer token". A bearer token is simply a string that _should_ only be held by an authenticated user. Thus, simply presenting this token proves your identity. You can probably derive from here why a JWT might make a good bearer token.
* Because bearer tokens are used for authentication, it's important they're kept secret. This is why transactions that use bearer tokens typically happen over SSL.

@@ -82,4 +97,4 @@ Without going too far down the rabbit hole, here's a description of the interact

Documentation can be found [on godoc.org](http://godoc.org/github.com/dgrijalva/jwt-go).

The command line utility included in this project (cmd/jwt) provides a straightforward example of token creation and parsing as well as a useful tool for debugging your own integration. You'll also find several implementation examples in the documentation.


@@ -1,5 +1,18 @@
## `jwt-go` Version History

#### 3.2.0

* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation
* HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate
* Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. Initial set include `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before.
* Deprecated `ParseFromRequestWithClaims` to simplify API in the future.

#### 3.1.0

* Improvements to `jwt` command line tool
* Added `SkipClaimsValidation` option to `Parser`
* Documentation updates

#### 3.0.0

* **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code


@@ -14,6 +14,7 @@ var (
)

// Implements the ECDSA family of signing methods
// Expects *ecdsa.PrivateKey for signing and *ecdsa.PublicKey for verification
type SigningMethodECDSA struct {
	Name string
	Hash crypto.Hash


@@ -51,13 +51,9 @@ func (e ValidationError) Error() string {
	} else {
		return "token is invalid"
	}
}

// No errors
func (e *ValidationError) valid() bool {
	return e.Errors == 0
}


@@ -7,6 +7,7 @@ import (
)

// Implements the HMAC-SHA family of signing methods
// Expects key type of []byte for both signing and validation
type SigningMethodHMAC struct {
	Name string
	Hash crypto.Hash
@@ -90,5 +91,5 @@ func (m *SigningMethodHMAC) Sign(signingString string, key interface{}) (string,
		return EncodeSegment(hasher.Sum(nil)), nil
	}

	return "", ErrInvalidKeyType
}


@@ -8,8 +8,9 @@ import (
)

type Parser struct {
	ValidMethods         []string // If populated, only these methods will be considered valid
	UseJSONNumber        bool     // Use JSON Number format in JSON decoder
	SkipClaimsValidation bool     // Skip claims validation during token parsing
}
// Parse, validate, and return a token.
@@ -20,55 +21,9 @@ func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) {
}

func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
	token, parts, err := p.ParseUnverified(tokenString, claims)
	if err != nil {
		return token, err
	}
} }
// Verify signing method is in the required set // Verify signing method is in the required set
@ -95,20 +50,25 @@ func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
	}

	if key, err = keyFunc(token); err != nil {
		// keyFunc returned an error
+		if ve, ok := err.(*ValidationError); ok {
+			return token, ve
+		}
		return token, &ValidationError{Inner: err, Errors: ValidationErrorUnverifiable}
	}

	vErr := &ValidationError{}

	// Validate Claims
-	if err := token.Claims.Valid(); err != nil {
+	if !p.SkipClaimsValidation {
+		if err := token.Claims.Valid(); err != nil {
			// If the Claims Valid returned an error, check if it is a validation error,
			// If it was another error type, create a ValidationError with a generic ClaimsInvalid flag set
			if e, ok := err.(*ValidationError); !ok {
				vErr = &ValidationError{Inner: err, Errors: ValidationErrorClaimsInvalid}
			} else {
				vErr = e
			}
+		}
	}
@ -126,3 +86,63 @@ func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
	return token, vErr
}
// WARNING: Don't use this method unless you know what you're doing
//
// This method parses the token but doesn't validate the signature. It's only
// ever useful in cases where you know the signature is valid (because it has
// been checked previously in the stack) and you want to extract values from
// it.
func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) {
parts = strings.Split(tokenString, ".")
if len(parts) != 3 {
return nil, parts, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed)
}
token = &Token{Raw: tokenString}
// parse Header
var headerBytes []byte
if headerBytes, err = DecodeSegment(parts[0]); err != nil {
if strings.HasPrefix(strings.ToLower(tokenString), "bearer ") {
return token, parts, NewValidationError("tokenstring should not contain 'bearer '", ValidationErrorMalformed)
}
return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
}
if err = json.Unmarshal(headerBytes, &token.Header); err != nil {
return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
}
// parse Claims
var claimBytes []byte
token.Claims = claims
if claimBytes, err = DecodeSegment(parts[1]); err != nil {
return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
}
dec := json.NewDecoder(bytes.NewBuffer(claimBytes))
if p.UseJSONNumber {
dec.UseNumber()
}
// JSON Decode. Special case for map type to avoid weird pointer behavior
if c, ok := token.Claims.(MapClaims); ok {
err = dec.Decode(&c)
} else {
err = dec.Decode(&claims)
}
// Handle decode error
if err != nil {
return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
}
// Lookup signature method
if method, ok := token.Header["alg"].(string); ok {
if token.Method = GetSigningMethod(method); token.Method == nil {
return token, parts, NewValidationError("signing method (alg) is unavailable.", ValidationErrorUnverifiable)
}
} else {
return token, parts, NewValidationError("signing method (alg) is unspecified.", ValidationErrorUnverifiable)
}
return token, parts, nil
}

View File

@ -7,6 +7,7 @@ import (
)

// Implements the RSA family of signing methods
+// Expects *rsa.PrivateKey for signing and *rsa.PublicKey for validation
type SigningMethodRSA struct {
	Name string
	Hash crypto.Hash
@ -44,7 +45,7 @@ func (m *SigningMethodRSA) Alg() string {
}

// Implements the Verify method from SigningMethod
-// For this signing method, must be an rsa.PublicKey structure.
+// For this signing method, must be an *rsa.PublicKey structure.
func (m *SigningMethodRSA) Verify(signingString, signature string, key interface{}) error {
	var err error
@ -73,7 +74,7 @@ func (m *SigningMethodRSA) Verify(signingString, signature string, key interface{}) error {
}

// Implements the Sign method from SigningMethod
-// For this signing method, must be an rsa.PrivateKey structure.
+// For this signing method, must be an *rsa.PrivateKey structure.
func (m *SigningMethodRSA) Sign(signingString string, key interface{}) (string, error) {
	var rsaKey *rsa.PrivateKey
	var ok bool
View File

@ -39,6 +39,38 @@ func ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error) {
	return pkey, nil
}
// Parse PEM encoded PKCS1 or PKCS8 private key protected with password
func ParseRSAPrivateKeyFromPEMWithPassword(key []byte, password string) (*rsa.PrivateKey, error) {
var err error
// Parse PEM block
var block *pem.Block
if block, _ = pem.Decode(key); block == nil {
return nil, ErrKeyMustBePEMEncoded
}
var parsedKey interface{}
var blockDecrypted []byte
if blockDecrypted, err = x509.DecryptPEMBlock(block, []byte(password)); err != nil {
return nil, err
}
if parsedKey, err = x509.ParsePKCS1PrivateKey(blockDecrypted); err != nil {
if parsedKey, err = x509.ParsePKCS8PrivateKey(blockDecrypted); err != nil {
return nil, err
}
}
var pkey *rsa.PrivateKey
var ok bool
if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
return nil, ErrNotRSAPrivateKey
}
return pkey, nil
}
// Parse PEM encoded PKCS1 or PKCS8 public key
func ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error) {
	var err error

View File

@ -1,148 +0,0 @@
# Tips
## Implementing default logging and re-authentication attempts
You can implement custom logging and/or limit re-auth attempts by creating a custom HTTP client
like the following and setting it as the provider client's HTTP Client (via the
`gophercloud.ProviderClient.HTTPClient` field):
```go
//...
// LogRoundTripper satisfies the http.RoundTripper interface and is used to
// customize the default Gophercloud RoundTripper to allow for logging.
type LogRoundTripper struct {
rt http.RoundTripper
numReauthAttempts int
}
// newHTTPClient return a custom HTTP client that allows for logging relevant
// information before and after the HTTP request.
func newHTTPClient() http.Client {
return http.Client{
Transport: &LogRoundTripper{
rt: http.DefaultTransport,
},
}
}
// RoundTrip performs a round-trip HTTP request and logs relevant information about it.
func (lrt *LogRoundTripper) RoundTrip(request *http.Request) (*http.Response, error) {
glog.Infof("Request URL: %s\n", request.URL)
response, err := lrt.rt.RoundTrip(request)
if response == nil {
return nil, err
}
if response.StatusCode == http.StatusUnauthorized {
if lrt.numReauthAttempts == 3 {
return response, fmt.Errorf("Tried to re-authenticate 3 times with no success.")
}
lrt.numReauthAttempts++
}
glog.Debugf("Response Status: %s\n", response.Status)
return response, nil
}
endpoint := "https://127.0.0.1/auth"
pc := openstack.NewClient(endpoint)
pc.HTTPClient = newHTTPClient()
//...
```
## Implementing custom objects
OpenStack request/response objects can differ in field names or types between clouds and API versions.
### Custom request objects
To pass custom options to a request, implement the desired `<ACTION>OptsBuilder` interface. For
example, to pass in
```go
type MyCreateServerOpts struct {
Name string
Size int
}
```
to `servers.Create`, simply implement the `servers.CreateOptsBuilder` interface:
```go
func (o MyCreateServerOpts) ToServerCreateMap() (map[string]interface{}, error) {
return map[string]interface{}{
"name": o.Name,
"size": o.Size,
}, nil
}
```
Then create an instance of your custom options object and pass it to `servers.Create`:
```go
// ...
myOpts := MyCreateServerOpts{
Name: "s1",
	Size: 100,
}
server, err := servers.Create(computeClient, myOpts).Extract()
// ...
```
### Custom response objects
Some OpenStack services have extensions. Extensions that are supported in Gophercloud can be
combined to create a custom object:
```go
// ...
type MyVolume struct {
volumes.Volume
tenantattr.VolumeExt
}
var v struct {
MyVolume `json:"volume"`
}
err := volumes.Get(client, volID).ExtractInto(&v)
// ...
```
## Overriding default `UnmarshalJSON` method
For some response objects, a field may be a custom type or may be allowed to take on
different types. In these cases, overriding the default `UnmarshalJSON` method may be
necessary. To do this, declare the JSON `struct` field tag as "-" and create an `UnmarshalJSON`
method on the type:
```go
// ...
type MyVolume struct {
	ID          string    `json:"id"`
	TimeCreated time.Time `json:"-"`
}
func (r *MyVolume) UnmarshalJSON(b []byte) error {
type tmp MyVolume
var s struct {
tmp
TimeCreated gophercloud.JSONRFC3339MilliNoZ `json:"created_at"`
}
err := json.Unmarshal(b, &s)
if err != nil {
return err
}
	*r = MyVolume(s.tmp)
	r.TimeCreated = time.Time(s.TimeCreated)
return err
}
// ...
```

View File

@ -1,32 +0,0 @@
# Compute
## Floating IPs
* `github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingip` is now `github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips`
* `floatingips.Associate` and `floatingips.Disassociate` have been removed.
* `floatingips.DisassociateOpts` is now required to disassociate a Floating IP.
## Security Groups
* `secgroups.AddServerToGroup` is now `secgroups.AddServer`.
* `secgroups.RemoveServerFromGroup` is now `secgroups.RemoveServer`.
## Servers
* `servers.Reboot` now requires a `servers.RebootOpts` struct:
```go
rebootOpts := &servers.RebootOpts{
Type: servers.SoftReboot,
}
res := servers.Reboot(client, server.ID, rebootOpts)
```
# Identity
## V3
### Tokens
* `Token.ExpiresAt` is now of type `gophercloud.JSONRFC3339Milli` instead of
`time.Time`

View File

@ -127,7 +127,7 @@ new resource in the `server` variable (a
## Advanced Usage

-Have a look at the [FAQ](./FAQ.md) for some tips on customizing the way Gophercloud works.
+Have a look at the [FAQ](./docs/FAQ.md) for some tips on customizing the way Gophercloud works.

## Backwards-Compatibility Guarantees
@ -141,3 +141,19 @@ See the [contributing guide](./.github/CONTRIBUTING.md).
If you're struggling with something or have spotted a potential bug, feel free
to submit an issue to our [bug tracker](/issues).
## Thank You
We'd like to extend special thanks and appreciation to the following:
### OpenLab
<a href="http://openlabtesting.org/"><img src="./docs/assets/openlab.png" width="600px"></a>
OpenLab is providing a full CI environment to test each PR and merge for a variety of OpenStack releases.
### VEXXHOST
<a href="https://vexxhost.com/"><img src="./docs/assets/vexxhost.png" width="600px"></a>
VEXXHOST is providing their services to assist with the development and testing of Gophercloud.

View File

@ -1,74 +0,0 @@
## On Pull Requests
- Before you start a PR there needs to be a Github issue and a discussion about it
on that issue with a core contributor, even if it's just a 'SGTM'.
- A PR's description must reference the issue it closes with a `For <ISSUE NUMBER>` (e.g. For #293).
- A PR's description must contain link(s) to the line(s) in the OpenStack
source code (on Github) that prove(s) the PR code to be valid. Links to documentation
are not good enough. The link(s) should be to a non-`master` branch. For example,
a pull request implementing the creation of a Neutron v2 subnet might put the
following link in the description:
https://github.com/openstack/neutron/blob/stable/mitaka/neutron/api/v2/attributes.py#L749
From that link, a reviewer (or user) can verify the fields in the request/response
objects in the PR.
- A PR that is in-progress should have `[wip]` in front of the PR's title. When
ready for review, remove the `[wip]` and ping a core contributor with an `@`.
- Forcing PRs to be small can have the effect of users submitting PRs in a hierarchical chain, with
one depending on the next. If a PR depends on another one, it should have a [Pending #PRNUM]
prefix in the PR title. In addition, it will be the PR submitter's responsibility to remove the
[Pending #PRNUM] tag once the PR has been updated with the merged, dependent PR. That will
let reviewers know it is ready to review.
- A PR should be small. Even if you intend on implementing an entire
service, a PR should only be one route of that service
(e.g. create server or get server, but not both).
- Unless explicitly asked, do not squash commits in the middle of a review; only
append. It makes it difficult for the reviewer to see what's changed from one
review to the next.
## On Code
- Regarding design: follow the code already in the library as closely as is reasonable.
  Most operations (e.g. create, delete) admit the same design.
Most operations (e.g. create, delete) admit the same design.
- Unit tests and acceptance (integration) tests must be written to cover each PR.
Tests for operations with several options (e.g. list, create) should include all
the options in the tests. This will allow users to verify an operation on their
own infrastructure and see an example of usage.
- If in doubt, ask in-line on the PR.
### File Structure
- The following should be used in most cases:
- `requests.go`: contains all the functions that make HTTP requests and the
types associated with the HTTP request (parameters for URL, body, etc)
- `results.go`: contains all the response objects and their methods
- `urls.go`: contains the endpoints to which the requests are made
### Naming
- For methods on a type in `results.go`, the receiver should be named `r` and the
variable into which it will be unmarshalled `s`.
- Functions in `requests.go`, with the exception of functions that return a
`pagination.Pager`, should be named returns of the name `r`.
- Functions in `requests.go` that accept request bodies should accept as their
last parameter an `interface` named `<Action>OptsBuilder` (eg `CreateOptsBuilder`).
This `interface` should have at the least a method named `To<Resource><Action>Map`
(eg `ToPortCreateMap`).
- Functions in `requests.go` that accept query strings should accept as their
last parameter an `interface` named `<Action>OptsBuilder` (eg `ListOptsBuilder`).
This `interface` should have at the least a method named `To<Resource><Action>Query`
(eg `ToServerListQuery`).
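The naming rules above can be sketched with a hypothetical resource. Everything here (the "port" resource, its fields, the simplified `Create`) is invented to illustrate the conventions, not actual Gophercloud code:

```go
package main

import "fmt"

// CreateOptsBuilder is the interface a requests.go Create function accepts,
// per the guide: <Action>OptsBuilder with a To<Resource><Action>Map method.
type CreateOptsBuilder interface {
	ToPortCreateMap() (map[string]interface{}, error)
}

// CreateOpts is the stock options struct for the hypothetical port resource.
type CreateOpts struct {
	Name         string
	AdminStateUp bool
}

func (opts CreateOpts) ToPortCreateMap() (map[string]interface{}, error) {
	return map[string]interface{}{
		"port": map[string]interface{}{
			"name":           opts.Name,
			"admin_state_up": opts.AdminStateUp,
		},
	}, nil
}

// Create would live in requests.go; the named return r follows the guide.
// (The real function would POST the body and return a result struct.)
func Create(opts CreateOptsBuilder) (r map[string]interface{}, err error) {
	r, err = opts.ToPortCreateMap()
	return
}

func main() {
	body, err := Create(CreateOpts{Name: "p1", AdminStateUp: true})
	if err != nil {
		panic(err)
	}
	fmt.Println(body["port"].(map[string]interface{})["name"]) // prints: p1
}
```

Because `Create` accepts the interface rather than the struct, users can substitute their own options type, exactly as the FAQ's "custom request objects" section describes.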

View File

@ -9,12 +9,32 @@ ProviderClient representing an active session on that provider.
Its fields are the union of those recognized by each identity implementation and
provider.

+An example of manually providing authentication information:
+
+	opts := gophercloud.AuthOptions{
+		IdentityEndpoint: "https://openstack.example.com:5000/v2.0",
+		Username: "{username}",
+		Password: "{password}",
+		TenantID: "{tenant_id}",
+	}
+
+	provider, err := openstack.AuthenticatedClient(opts)
+
+An example of using AuthOptionsFromEnv(), where the environment variables can
+be read from a file, such as a standard openrc file:
+
+	opts, err := openstack.AuthOptionsFromEnv()
+	provider, err := openstack.AuthenticatedClient(opts)
*/
type AuthOptions struct {
	// IdentityEndpoint specifies the HTTP endpoint that is required to work with
	// the Identity API of the appropriate version. While it's ultimately needed by
	// all of the identity services, it will often be populated by a provider-level
	// function.
+	//
+	// The IdentityEndpoint is typically referred to as the "auth_url" or
+	// "OS_AUTH_URL" in the information provided by the cloud operator.
	IdentityEndpoint string `json:"-"`

	// Username is required if using Identity V2 API. Consult with your provider's
@ -39,7 +59,7 @@ type AuthOptions struct {
	// If DomainID or DomainName are provided, they will also apply to TenantName.
	// It is not currently possible to authenticate with Username and a Domain
	// and scope to a Project in a different Domain by using TenantName. To
-	// accomplish that, the ProjectID will need to be provided to the TenantID
+	// accomplish that, the ProjectID will need to be provided as the TenantID
	// option.
	TenantID   string `json:"tenantId,omitempty"`
	TenantName string `json:"tenantName,omitempty"`
@ -50,15 +70,28 @@ type AuthOptions struct {
	// false, it will not cache these settings, but re-authentication will not be
	// possible. This setting defaults to false.
	//
-	// NOTE: The reauth function will try to re-authenticate endlessly if left unchecked.
-	// The way to limit the number of attempts is to provide a custom HTTP client to the provider client
-	// and provide a transport that implements the RoundTripper interface and stores the number of failed retries.
-	// For an example of this, see here: https://github.com/rackspace/rack/blob/1.0.0/auth/clients.go#L311
+	// NOTE: The reauth function will try to re-authenticate endlessly if left
+	// unchecked. The way to limit the number of attempts is to provide a custom
+	// HTTP client to the provider client and provide a transport that implements
+	// the RoundTripper interface and stores the number of failed retries. For an
+	// example of this, see here:
+	// https://github.com/rackspace/rack/blob/1.0.0/auth/clients.go#L311
	AllowReauth bool `json:"-"`

	// TokenID allows users to authenticate (possibly as another user) with an
	// authentication token ID.
	TokenID string `json:"-"`
+
+	// Scope determines the scoping of the authentication request.
+	Scope *AuthScope `json:"-"`
+}
+
+// AuthScope allows a created token to be limited to a specific domain or project.
+type AuthScope struct {
+	ProjectID   string
+	ProjectName string
+	DomainID    string
+	DomainName  string
}
// ToTokenV2CreateMap allows AuthOptions to satisfy the AuthOptionsBuilder // ToTokenV2CreateMap allows AuthOptions to satisfy the AuthOptionsBuilder
@ -241,82 +274,85 @@ func (opts *AuthOptions) ToTokenV3CreateMap(scope map[string]interface{}) (map[string]interface{}, error) {
}

func (opts *AuthOptions) ToTokenV3ScopeMap() (map[string]interface{}, error) {
-	var scope struct {
-		ProjectID   string
-		ProjectName string
-		DomainID    string
-		DomainName  string
-	}
-
-	if opts.TenantID != "" {
-		scope.ProjectID = opts.TenantID
-	} else {
-		if opts.TenantName != "" {
-			scope.ProjectName = opts.TenantName
-			scope.DomainID = opts.DomainID
-			scope.DomainName = opts.DomainName
-		}
-	}
+	// For backwards compatibility.
+	// If AuthOptions.Scope was not set, try to determine it.
+	// This works well for common scenarios.
+	if opts.Scope == nil {
+		opts.Scope = new(AuthScope)
+		if opts.TenantID != "" {
+			opts.Scope.ProjectID = opts.TenantID
+		} else {
+			if opts.TenantName != "" {
+				opts.Scope.ProjectName = opts.TenantName
+				opts.Scope.DomainID = opts.DomainID
+				opts.Scope.DomainName = opts.DomainName
+			}
+		}
+	}

-	if scope.ProjectName != "" {
+	if opts.Scope.ProjectName != "" {
		// ProjectName provided: either DomainID or DomainName must also be supplied.
		// ProjectID may not be supplied.
-		if scope.DomainID == "" && scope.DomainName == "" {
+		if opts.Scope.DomainID == "" && opts.Scope.DomainName == "" {
			return nil, ErrScopeDomainIDOrDomainName{}
		}
-		if scope.ProjectID != "" {
+		if opts.Scope.ProjectID != "" {
			return nil, ErrScopeProjectIDOrProjectName{}
		}

-		if scope.DomainID != "" {
+		if opts.Scope.DomainID != "" {
			// ProjectName + DomainID
			return map[string]interface{}{
				"project": map[string]interface{}{
-					"name":   &scope.ProjectName,
-					"domain": map[string]interface{}{"id": &scope.DomainID},
+					"name":   &opts.Scope.ProjectName,
+					"domain": map[string]interface{}{"id": &opts.Scope.DomainID},
				},
			}, nil
		}

-		if scope.DomainName != "" {
+		if opts.Scope.DomainName != "" {
			// ProjectName + DomainName
			return map[string]interface{}{
				"project": map[string]interface{}{
-					"name":   &scope.ProjectName,
-					"domain": map[string]interface{}{"name": &scope.DomainName},
+					"name":   &opts.Scope.ProjectName,
+					"domain": map[string]interface{}{"name": &opts.Scope.DomainName},
				},
			}, nil
		}
-	} else if scope.ProjectID != "" {
+	} else if opts.Scope.ProjectID != "" {
		// ProjectID provided. ProjectName, DomainID, and DomainName may not be provided.
-		if scope.DomainID != "" {
+		if opts.Scope.DomainID != "" {
			return nil, ErrScopeProjectIDAlone{}
		}
-		if scope.DomainName != "" {
+		if opts.Scope.DomainName != "" {
			return nil, ErrScopeProjectIDAlone{}
		}

		// ProjectID
		return map[string]interface{}{
			"project": map[string]interface{}{
-				"id": &scope.ProjectID,
+				"id": &opts.Scope.ProjectID,
			},
		}, nil
-	} else if scope.DomainID != "" {
+	} else if opts.Scope.DomainID != "" {
		// DomainID provided. ProjectID, ProjectName, and DomainName may not be provided.
-		if scope.DomainName != "" {
+		if opts.Scope.DomainName != "" {
			return nil, ErrScopeDomainIDOrDomainName{}
		}

		// DomainID
		return map[string]interface{}{
			"domain": map[string]interface{}{
-				"id": &scope.DomainID,
+				"id": &opts.Scope.DomainID,
			},
		}, nil
+	} else if opts.Scope.DomainName != "" {
+		// DomainName
+		return map[string]interface{}{
+			"domain": map[string]interface{}{
+				"name": &opts.Scope.DomainName,
+			},
+		}, nil
-	} else if scope.DomainName != "" {
-		return nil, ErrScopeDomainName{}
	}

	return nil, nil
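The validation rules in this function form a small decision table over which scope fields may be combined. A self-contained sketch of that table, simplified for illustration (plain string values instead of pointers, generic errors instead of gophercloud's typed errors):

```go
package main

import (
	"errors"
	"fmt"
)

// AuthScope mirrors the struct added above.
type AuthScope struct {
	ProjectID, ProjectName, DomainID, DomainName string
}

// scopeMap reproduces ToTokenV3ScopeMap's decision table:
// ProjectName needs a domain; ProjectID and DomainID stand alone;
// DomainName alone is now a valid scope; nothing set means unscoped.
func scopeMap(s AuthScope) (map[string]interface{}, error) {
	switch {
	case s.ProjectName != "":
		if s.DomainID == "" && s.DomainName == "" {
			return nil, errors.New("ProjectName requires DomainID or DomainName")
		}
		if s.ProjectID != "" {
			return nil, errors.New("supply ProjectID or ProjectName, not both")
		}
		domain := map[string]interface{}{"id": s.DomainID}
		if s.DomainID == "" {
			domain = map[string]interface{}{"name": s.DomainName}
		}
		return map[string]interface{}{
			"project": map[string]interface{}{"name": s.ProjectName, "domain": domain},
		}, nil
	case s.ProjectID != "":
		if s.DomainID != "" || s.DomainName != "" {
			return nil, errors.New("ProjectID must be supplied alone")
		}
		return map[string]interface{}{
			"project": map[string]interface{}{"id": s.ProjectID},
		}, nil
	case s.DomainID != "":
		if s.DomainName != "" {
			return nil, errors.New("supply DomainID or DomainName, not both")
		}
		return map[string]interface{}{
			"domain": map[string]interface{}{"id": s.DomainID},
		}, nil
	case s.DomainName != "":
		return map[string]interface{}{
			"domain": map[string]interface{}{"name": s.DomainName},
		}, nil
	}
	return nil, nil // unscoped token
}

func main() {
	m, err := scopeMap(AuthScope{ProjectName: "demo", DomainID: "default"})
	if err != nil {
		panic(err)
	}
	fmt.Println(m["project"].(map[string]interface{})["name"]) // prints: demo
}
```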

View File

@ -3,11 +3,17 @@ Package gophercloud provides a multi-vendor interface to OpenStack-compatible
clouds. The library has a three-level hierarchy: providers, services, and
resources.

-Provider structs represent the service providers that offer and manage a
-collection of services. The IdentityEndpoint is typically referred to as
-"auth_url" in information provided by the cloud operator. Additionally,
-the cloud may refer to TenantID or TenantName as project_id and project_name.
-These are defined like so:
+Authenticating with Providers
+
+Provider structs represent the cloud providers that offer and manage a
+collection of services. You will generally want to create one Provider
+client per OpenStack cloud.
+
+Use your OpenStack credentials to create a Provider client. The
+IdentityEndpoint is typically referred to as "auth_url" or "OS_AUTH_URL" in
+information provided by the cloud operator. Additionally, the cloud may refer to
+TenantID or TenantName as project_id and project_name. Credentials are
+specified like so:

	opts := gophercloud.AuthOptions{
		IdentityEndpoint: "https://openstack.example.com:5000/v2.0",
@ -18,6 +24,16 @@ These are defined like so:

	provider, err := openstack.AuthenticatedClient(opts)

+You may also use the openstack.AuthOptionsFromEnv() helper function. This
+function reads in standard environment variables frequently found in an
+OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant"
+instead of "project".
+
+	opts, err := openstack.AuthOptionsFromEnv()
+	provider, err := openstack.AuthenticatedClient(opts)
+
+Service Clients
+
Service structs are specific to a provider and handle all of the logic and
operations for a particular OpenStack service. Examples of services include:
Compute, Object Storage, Block Storage. In order to define one, you need to
@ -27,6 +43,8 @@ pass in the parent provider, like so:

	client := openstack.NewComputeV2(provider, opts)

+Resources
+
Resource structs are the domain models that services make use of in order
to work with and represent the state of API resources:
@ -62,6 +80,12 @@ of results:

		return true, nil
	})

+If you want to obtain the entire collection of pages without doing any
+intermediary processing on each page, you can use the AllPages method:
+
+	allPages, err := servers.List(client, nil).AllPages()
+	allServers, err := servers.ExtractServers(allPages)
+
This top-level package contains utility functions and data types that are used
throughout the provider and service packages. Of particular note for end users
are the AuthOptions and EndpointOpts structs.

Some files were not shown because too many files have changed in this diff.