Merge remote-tracking branch 'upstream/master' into packer-builder-profitbricks

commit 8f8907ee13

@@ -12,7 +12,7 @@ FOR BUGS:

Describe the problem and include the following information:

-- Packer Version
+- Packer version from `packer version`
- Host platform
- Debug log output from `PACKER_LOG=1 packer build template.json`.
  Please paste this in a gist https://gist.github.com

CHANGELOG.md (17 changes)

@@ -23,7 +23,10 @@ IMPROVEMENTS:

* builder/azure: Support for custom images [GH-3575]
* builder/azure: Removed superfluous polling code for deployments [GH-3638]
* builder/azure: Made `tenant_id` optional [GH-3643]
* builder/digitalocean: Use `state_timeout` for unlock and off transitions.
  [GH-3444]
* builder/google: Added support for `image_family` [GH-3503]
* builder/google: Use gcloud application default credentials. [GH-3655]
* builder/null: Can now be used with WinRM [GH-2525]
* builder/parallels: Now pauses between `boot_command` entries when running
  with `-debug` [GH-3547]

@@ -36,13 +39,18 @@ IMPROVEMENTS:

* builder/vmware: Now paused between `boot_command` entries when running with
  `-debug` [GH-3542]
* builder/vmware: Added `vnc_bind_address` option [GH-3565]
* builder/vmware: Adds passwords for VNC [GH-2325]
* builder/vmware: Handle connection to VM with more than one NIC on ESXi
  [GH-3347]
* builder/qemu: Now pauses between `boot_command` entries when running with
  `-debug` [GH-3547]
* provisioner/ansible: Improved logging and error handling [GH-3477]
* provisioner/chef: Added `knife_command` option and added a correct default
  value for Windows [GH-3622]
* provisioner/puppet: Added `execute_command` option [GH-3614]
* post-processor/compress: Added support for bgzf compression [GH-3501]
* post-processor/docker: Preserve tags when running docker push [GH-3631]
* scripts: Added `help` target to Makefile [GH-3290]

BUG FIXES:

@@ -50,8 +58,17 @@ BUG FIXES:

* post-processor/vsphere: Fix upload failures with vsphere [GH-3321]
* provisioner/ansible: Properly set host key checking even when a custom ENV
  is specified [GH-3568]
* builder/amazon: Use `temporary_key_pair_name` when specified. [GH-3739]
* builder/amazon: Add 0.5 cents to discovered spot price. [GH-3662]
* builder/azure: check for empty resource group [GH-3606]
* builder/azure: fix token validity test [GH-3609]
* builder/virtualbox: Respect `ssh_host` [GH-3617]
* builder/vmware: Re-introduce case sensitive VMX keys [GH-2707]
* builder/vmware: Don't check for poweron errors on ESXi [GH-3195]
* builder/vmware: Respect `ssh_host`/`winrm_host` on ESXi [GH-3738]
* builder/vmware: Do not add remotedisplay.vnc.ip to VMX data on ESXi
  [GH-3740]
* website: improved rendering on iPad [GH-3780]

## 0.10.1 (May 7, 2016)

@@ -69,7 +69,8 @@ following steps in order to be able to compile and test Packer. These instructio

   `$GOPATH/src/github.com/mitchellh/packer`.

4. When working on packer `cd $GOPATH/src/github.com/mitchellh/packer` so you
-   can run `make` and easily access other files.
+   can run `make` and easily access other files. Run `make help` to get
+   information about make targets.

5. Make your changes to the Packer source. You can run `make` in
   `$GOPATH/src/github.com/mitchellh/packer` to run tests and build the packer

@@ -137,4 +138,4 @@ sometimes take a very long time.

Acceptance tests typically require other environment variables to be set for
things such as API tokens and keys. Each test should error and tell you which
-credentials are missing, so those are not documented here.
+credentials are missing, so those are not documented here.
(whitespace-only change: trailing newline added)

Makefile (21 changes)

@@ -9,9 +9,9 @@ default: deps generate test dev

ci: deps test

-release: deps test releasebin package
+release: deps test releasebin package ## Build a release build

-bin: deps
+bin: deps ## Build debug/test build
	@echo "WARN: 'make bin' is for debug / test builds only. Use 'make release' for release builds."
	@GO15VENDOREXPERIMENT=1 sh -c "$(CURDIR)/scripts/build.sh"

@@ -35,14 +35,14 @@ deps:
		godep restore; \
	fi

-dev: deps
+dev: deps ## Build and install a development build
	@grep 'const VersionPrerelease = ""' version/version.go > /dev/null ; if [ $$? -eq 0 ]; then \
		echo "ERROR: You must add prerelease tags to version/version.go prior to making a dev build."; \
		exit 1; \
	fi
	@PACKER_DEV=1 GO15VENDOREXPERIMENT=1 sh -c "$(CURDIR)/scripts/build.sh"

-fmt:
+fmt: ## Format Go code
	go fmt `go list ./... | grep -v vendor`

# Install js-beautify with npm install -g js-beautify

@@ -51,11 +51,11 @@ fmt-examples:

# generate runs `go generate` to build the dynamically generated
# source files.
-generate: deps
+generate: deps ## Generate dynamically generated code
	go generate .
	go fmt command/plugin.go

-test: deps
+test: deps ## Run unit tests
	@go test $(TEST) $(TESTARGS) -timeout=2m
	@go tool vet $(VET) ; if [ $$? -eq 1 ]; then \
		echo "ERROR: Vet found problems in the code."; \

@@ -63,11 +63,11 @@ test: deps
	fi

# testacc runs acceptance tests
-testacc: deps generate
+testacc: deps generate ## Run acceptance tests
	@echo "WARN: Acceptance tests will take a long time to run and may cost money. Ctrl-C if you want to cancel."
	PACKER_ACC=1 go test -v $(TEST) $(TESTARGS) -timeout=45m

-testrace: deps
+testrace: deps ## Test for race conditions
	@go test -race $(TEST) $(TESTARGS) -timeout=2m

updatedeps:

@@ -77,8 +77,11 @@ updatedeps:

# This is used to add new dependencies to packer. If you are submitting a PR
# that includes new dependencies you will need to run this.
-vendor:
+vendor: ## Add new dependencies.
	godep restore
	godep save

+help:
+	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

.PHONY: bin checkversion ci default deps fmt fmt-examples generate releasebin test testacc testrace updatedeps
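The new `help` target above relies on the "self-documenting Makefile" pattern: every rule line carrying a `## description` comment is grepped out and printed. A minimal standalone sketch of that extraction in Go (the `extractHelp` helper is hypothetical, not part of the repo):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// extractHelp mimics the grep/awk pipeline in the Makefile's `help` target:
// a line like "test: deps ## Run unit tests" yields {"test": "Run unit tests"};
// targets without a "## " comment are skipped.
func extractHelp(makefile string) map[string]string {
	re := regexp.MustCompile(`^([a-zA-Z_-]+):.*?## (.*)$`)
	help := map[string]string{}
	for _, line := range strings.Split(makefile, "\n") {
		if m := re.FindStringSubmatch(line); m != nil {
			help[m[1]] = m[2]
		}
	}
	return help
}

func main() {
	sample := "test: deps ## Run unit tests\nfmt: ## Format Go code\nci: deps test\n"
	for target, desc := range extractHelp(sample) {
		fmt.Printf("%-10s %s\n", target, desc)
	}
}
```

Note that `ci: deps test` produces no help entry, which is exactly why the diff annotates each user-facing target with a `## ` comment.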

@@ -44,10 +44,14 @@ type RunConfig struct {
}

func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
-	// if we are not given an explicit keypairname, create a temporary one
+	// If we are not given an explicit ssh_keypair_name,
+	// then create a temporary one, but only if the
+	// temporary_key_pair_name has not been provided.
	if c.SSHKeyPairName == "" {
-		c.TemporaryKeyPairName = fmt.Sprintf(
-			"packer %s", uuid.TimeOrderedUUID())
+		if c.TemporaryKeyPairName == "" {
+			c.TemporaryKeyPairName = fmt.Sprintf(
+				"packer_%s", uuid.TimeOrderedUUID())
+		}
	}

	if c.WindowsPasswordTimeout == 0 {
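The hunk above makes two changes: a user-supplied `temporary_key_pair_name` is no longer overwritten, and the generated default uses an underscore ("packer_…") instead of a space. A standalone sketch of the resulting precedence (the `defaultKeyPairName` helper is hypothetical; `uuid` stands in for `uuid.TimeOrderedUUID()`):

```go
package main

import "fmt"

// defaultKeyPairName sketches the key pair name precedence after the fix:
// an explicit ssh_keypair_name wins, then a user-supplied
// temporary_key_pair_name, and only then is a "packer_<uuid>" name generated.
func defaultKeyPairName(sshKeyPairName, temporaryKeyPairName, uuid string) string {
	if sshKeyPairName != "" {
		return sshKeyPairName
	}
	if temporaryKeyPairName != "" {
		return temporaryKeyPairName
	}
	return fmt.Sprintf("packer_%s", uuid)
}

func main() {
	fmt.Println(defaultKeyPairName("", "", "5790d491-a0b8-c84c-c9d2-2aea55086550"))
	fmt.Println(defaultKeyPairName("", "ssh-key-123", "ignored"))
}
```

Before this change, `c.TemporaryKeyPairName` was unconditionally regenerated whenever `ssh_keypair_name` was empty, which is the GH-3739 bug noted in the changelog.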

@@ -3,6 +3,7 @@ package common

import (
	"io/ioutil"
	"os"
+	"regexp"
	"testing"

	"github.com/mitchellh/packer/helper/communicator"

@@ -140,6 +141,21 @@ func TestRunConfigPrepare_TemporaryKeyPairName(t *testing.T) {
	}

	if c.TemporaryKeyPairName == "" {
-		t.Fatal("keypair empty")
+		t.Fatal("keypair name is empty")
	}

+	// Match prefix and UUID, e.g. "packer_5790d491-a0b8-c84c-c9d2-2aea55086550".
+	r := regexp.MustCompile(`\Apacker_(?:(?i)[a-f\d]{8}(?:-[a-f\d]{4}){3}-[a-f\d]{12}?)\z`)
+	if !r.MatchString(c.TemporaryKeyPairName) {
+		t.Fatal("keypair name is not valid")
+	}

	c.TemporaryKeyPairName = "ssh-key-123"
	if err := c.Prepare(nil); len(err) != 0 {
		t.Fatalf("err: %s", err)
	}

	if c.TemporaryKeyPairName != "ssh-key-123" {
		t.Fatal("keypair name does not match")
	}
}
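The regular expression added in the test above pins the generated name to a `packer_` prefix followed by a time-ordered UUID. It can be exercised in isolation like this (same pattern, hypothetical variable name):

```go
package main

import (
	"fmt"
	"regexp"
)

// keyPairName is the pattern from the new unit test: "packer_" plus a
// case-insensitive 8-4-4-4-12 hex UUID, anchored to the whole string.
var keyPairName = regexp.MustCompile(`\Apacker_(?:(?i)[a-f\d]{8}(?:-[a-f\d]{4}){3}-[a-f\d]{12}?)\z`)

func main() {
	// Underscore-separated name, as generated after the fix.
	fmt.Println(keyPairName.MatchString("packer_5790d491-a0b8-c84c-c9d2-2aea55086550"))
	// Space-separated name, as generated before the fix: rejected.
	fmt.Println(keyPairName.MatchString("packer 5790d491-a0b8-c84c-c9d2-2aea55086550"))
}
```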

@@ -147,6 +147,10 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
		state.Put("error", err)
		ui.Error(err.Error())
		return multistep.ActionHalt
+	} else {
+		// Add 0.5 cents to minimum spot bid to ensure capacity will be available
+		// Avoids price-too-low error in active markets which can fluctuate
+		price = price + 0.005
	}

	spotPrice = strconv.FormatFloat(price, 'f', -1, 64)

@@ -156,16 +160,16 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi
(whitespace-only change: the RunInstancesInput literal below was realigned)

	if spotPrice == "" || spotPrice == "0" {
		runOpts := &ec2.RunInstancesInput{
			KeyName:             &keyName,
			ImageId:             &s.SourceAMI,
			InstanceType:        &s.InstanceType,
			UserData:            &userData,
			MaxCount:            aws.Int64(1),
			MinCount:            aws.Int64(1),
			IamInstanceProfile:  &ec2.IamInstanceProfileSpecification{Name: &s.IamInstanceProfile},
			BlockDeviceMappings: s.BlockDevices.BuildLaunchDevices(),
			Placement:           &ec2.Placement{AvailabilityZone: &s.AvailabilityZone},
			EbsOptimized:        &s.EbsOptimized,
		}

		if s.SubnetId != "" && s.AssociatePublicIpAddress {

@@ -117,21 +117,21 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
		BlockDevices: b.config.BlockDevices,
	},
	&awscommon.StepRunSourceInstance{
-		Debug:                    b.config.PackerDebug,
-		ExpectedRootDevice:       "ebs",
-		SpotPrice:                b.config.SpotPrice,
-		SpotPriceProduct:         b.config.SpotPriceAutoProduct,
-		InstanceType:             b.config.InstanceType,
-		UserData:                 b.config.UserData,
-		UserDataFile:             b.config.UserDataFile,
-		SourceAMI:                b.config.SourceAmi,
-		IamInstanceProfile:       b.config.IamInstanceProfile,
-		SubnetId:                 b.config.SubnetId,
-		AssociatePublicIpAddress: b.config.AssociatePublicIpAddress,
-		EbsOptimized:             b.config.EbsOptimized,
-		AvailabilityZone:         b.config.AvailabilityZone,
-		BlockDevices:             b.config.BlockDevices,
-		Tags:                     b.config.RunTags,
+		Debug:                             b.config.PackerDebug,
+		ExpectedRootDevice:                "ebs",
+		SpotPrice:                         b.config.SpotPrice,
+		SpotPriceProduct:                  b.config.SpotPriceAutoProduct,
+		InstanceType:                      b.config.InstanceType,
+		UserData:                          b.config.UserData,
+		UserDataFile:                      b.config.UserDataFile,
+		SourceAMI:                         b.config.SourceAmi,
+		IamInstanceProfile:                b.config.IamInstanceProfile,
+		SubnetId:                          b.config.SubnetId,
+		AssociatePublicIpAddress:          b.config.AssociatePublicIpAddress,
+		EbsOptimized:                      b.config.EbsOptimized,
+		AvailabilityZone:                  b.config.AvailabilityZone,
+		BlockDevices:                      b.config.BlockDevices,
+		Tags:                              b.config.RunTags,
+		InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior,
	},
	&stepTagEBSVolumes{

@@ -224,6 +224,7 @@ func (b *Builder) configureStateBag(stateBag multistep.StateBag) {
	stateBag.Put(constants.AuthorizedKey, b.config.sshAuthorizedKey)
	stateBag.Put(constants.PrivateKey, b.config.sshPrivateKey)

+	stateBag.Put(constants.ArmTags, &b.config.AzureTags)
	stateBag.Put(constants.ArmComputeName, b.config.tmpComputeName)
	stateBag.Put(constants.ArmDeploymentName, b.config.tmpDeploymentName)
	stateBag.Put(constants.ArmKeyVaultName, b.config.tmpKeyVaultName)

@@ -4,8 +4,9 @@
package arm

import (
-	"github.com/mitchellh/packer/builder/azure/common/constants"
	"testing"
+
+	"github.com/mitchellh/packer/builder/azure/common/constants"
)

func TestStateBagShouldBePopulatedExpectedValues(t *testing.T) {

@@ -19,6 +20,7 @@ func TestStateBagShouldBePopulatedExpectedValues(t *testing.T) {
	constants.AuthorizedKey,
	constants.PrivateKey,

+	constants.ArmTags,
	constants.ArmComputeName,
	constants.ArmDeploymentName,
	constants.ArmLocation,

@@ -70,8 +70,9 @@ type Config struct {
	VMSize string `mapstructure:"vm_size"`

	// Deployment
-	ResourceGroupName string `mapstructure:"resource_group_name"`
-	StorageAccount    string `mapstructure:"storage_account"`
+	AzureTags         map[string]*string `mapstructure:"azure_tags"`
+	ResourceGroupName string             `mapstructure:"resource_group_name"`
+	StorageAccount    string             `mapstructure:"storage_account"`
	storageAccountBlobEndpoint string
	CloudEnvironmentName       string `mapstructure:"cloud_environment_name"`
	cloudEnvironment           *azure.Environment

@@ -222,6 +223,7 @@ func newConfig(raws ...interface{}) (*Config, []string, error) {
	errs = packer.MultiErrorAppend(errs, c.Comm.Prepare(c.ctx)...)

	assertRequiredParametersSet(&c, errs)
+	assertTagProperties(&c, errs)
	if errs != nil && len(errs.Errors) > 0 {
		return nil, nil, errs
	}

@@ -349,6 +351,21 @@ func provideDefaultValues(c *Config) {
	}
}

+func assertTagProperties(c *Config, errs *packer.MultiError) {
+	if len(c.AzureTags) > 15 {
+		errs = packer.MultiErrorAppend(errs, fmt.Errorf("a max of 15 tags are supported, but %d were provided", len(c.AzureTags)))
+	}
+
+	for k, v := range c.AzureTags {
+		if len(k) > 512 {
+			errs = packer.MultiErrorAppend(errs, fmt.Errorf("the tag name %q exceeds (%d) the 512 character limit", k, len(k)))
+		}
+		if len(*v) > 256 {
+			errs = packer.MultiErrorAppend(errs, fmt.Errorf("the tag value %q exceeds (%d) the 256 character limit", *v, len(*v)))
+		}
+	}
+}

func assertRequiredParametersSet(c *Config, errs *packer.MultiError) {
	/////////////////////////////////////////////
	// Authentication via OAUTH

@@ -4,6 +4,7 @@
package arm

import (
+	"fmt"
	"strings"
	"testing"
	"time"

@@ -32,16 +33,16 @@ func TestConfigShouldProvideReasonableDefaultValues(t *testing.T) {
	c, _, err := newConfig(getArmBuilderConfiguration(), getPackerConfiguration())

	if err != nil {
-		t.Errorf("Expected configuration creation to succeed, but it failed!\n")
+		t.Error("Expected configuration creation to succeed, but it failed!\n")
		t.Fatalf(" errors: %s\n", err)
	}

	if c.UserName == "" {
-		t.Errorf("Expected 'UserName' to be populated, but it was empty!")
+		t.Error("Expected 'UserName' to be populated, but it was empty!")
	}

	if c.VMSize == "" {
-		t.Errorf("Expected 'VMSize' to be populated, but it was empty!")
+		t.Error("Expected 'VMSize' to be populated, but it was empty!")
	}

	if c.ObjectID != "" {

@@ -283,7 +284,7 @@ func TestUserShouldProvideRequiredValues(t *testing.T) {
	// Ensure we can successfully create a config.
	_, _, err := newConfig(builderValues, getPackerConfiguration())
	if err != nil {
-		t.Errorf("Expected configuration creation to succeed, but it failed!\n")
+		t.Error("Expected configuration creation to succeed, but it failed!\n")
		t.Fatalf(" -> %+v\n", builderValues)
	}

@@ -294,7 +295,7 @@ func TestUserShouldProvideRequiredValues(t *testing.T) {

	_, _, err := newConfig(builderValues, getPackerConfiguration())
	if err == nil {
-		t.Errorf("Expected configuration creation to fail, but it succeeded!\n")
+		t.Error("Expected configuration creation to fail, but it succeeded!\n")
		t.Fatalf(" -> %+v\n", builderValues)
	}

@@ -374,7 +375,7 @@ func TestWinRMConfigShouldSetRoundTripDecorator(t *testing.T) {
	}

	if c.Comm.WinRMTransportDecorator == nil {
-		t.Errorf("Expected WinRMTransportDecorator to be set, but it was nil")
+		t.Error("Expected WinRMTransportDecorator to be set, but it was nil")
	}
}

@@ -425,10 +426,10 @@ func TestUseDeviceLoginIsDisabledForWindows(t *testing.T) {
	}

	if !strings.Contains(err.Error(), "client_id must be specified") {
-		t.Errorf("Expected to find error for 'client_id must be specified")
+		t.Error("Expected to find error for 'client_id must be specified")
	}
	if !strings.Contains(err.Error(), "client_secret must be specified") {
-		t.Errorf("Expected to find error for 'client_secret must be specified")
+		t.Error("Expected to find error for 'client_secret must be specified")
	}
}

@@ -533,6 +534,145 @@ func TestConfigShouldRejectMalformedCaptureContainerName(t *testing.T) {
	}
}

+func TestConfigShouldAcceptTags(t *testing.T) {
+	config := map[string]interface{}{
+		"capture_name_prefix":    "ignore",
+		"capture_container_name": "ignore",
+		"image_offer":            "ignore",
+		"image_publisher":        "ignore",
+		"image_sku":              "ignore",
+		"location":               "ignore",
+		"storage_account":        "ignore",
+		"resource_group_name":    "ignore",
+		"subscription_id":        "ignore",
+		"communicator":           "none",
+		// Does not matter for this test case, just pick one.
+		"os_type": constants.Target_Linux,
+		"azure_tags": map[string]string{
+			"tag01": "value01",
+			"tag02": "value02",
+		},
+	}
+
+	c, _, err := newConfig(config, getPackerConfiguration())
+
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	if len(c.AzureTags) != 2 {
+		t.Fatalf("expected to find 2 tags, but got %d", len(c.AzureTags))
+	}
+
+	if _, ok := c.AzureTags["tag01"]; !ok {
+		t.Error("expected to find key=\"tag01\", but did not")
+	}
+	if _, ok := c.AzureTags["tag02"]; !ok {
+		t.Error("expected to find key=\"tag02\", but did not")
+	}
+
+	value := c.AzureTags["tag01"]
+	if *value != "value01" {
+		t.Errorf("expected AzureTags[\"tag01\"] to have value \"value01\", but got %q", *value)
+	}
+
+	value = c.AzureTags["tag02"]
+	if *value != "value02" {
+		t.Errorf("expected AzureTags[\"tag02\"] to have value \"value02\", but got %q", *value)
+	}
+}
+
+func TestConfigShouldRejectTagsInExcessOf15AcceptTags(t *testing.T) {
+	tooManyTags := map[string]string{}
+	for i := 0; i < 16; i++ {
+		tooManyTags[fmt.Sprintf("tag%.2d", i)] = "ignored"
+	}
+
+	config := map[string]interface{}{
+		"capture_name_prefix":    "ignore",
+		"capture_container_name": "ignore",
+		"image_offer":            "ignore",
+		"image_publisher":        "ignore",
+		"image_sku":              "ignore",
+		"location":               "ignore",
+		"storage_account":        "ignore",
+		"resource_group_name":    "ignore",
+		"subscription_id":        "ignore",
+		"communicator":           "none",
+		// Does not matter for this test case, just pick one.
+		"os_type":    constants.Target_Linux,
+		"azure_tags": tooManyTags,
+	}
+
+	_, _, err := newConfig(config, getPackerConfiguration())
+
+	if err == nil {
+		t.Fatal("expected config to reject based on an excessive amount of tags (> 15)")
+	}
+}
+
+func TestConfigShouldRejectExcessiveTagNameLength(t *testing.T) {
+	nameTooLong := make([]byte, 513)
+	for i := range nameTooLong {
+		nameTooLong[i] = 'a'
+	}
+
+	tags := map[string]string{}
+	tags[string(nameTooLong)] = "ignored"
+
+	config := map[string]interface{}{
+		"capture_name_prefix":    "ignore",
+		"capture_container_name": "ignore",
+		"image_offer":            "ignore",
+		"image_publisher":        "ignore",
+		"image_sku":              "ignore",
+		"location":               "ignore",
+		"storage_account":        "ignore",
+		"resource_group_name":    "ignore",
+		"subscription_id":        "ignore",
+		"communicator":           "none",
+		// Does not matter for this test case, just pick one.
+		"os_type":    constants.Target_Linux,
+		"azure_tags": tags,
+	}
+
+	_, _, err := newConfig(config, getPackerConfiguration())
+	if err == nil {
+		t.Fatal("expected config to reject tag name based on length (> 512)")
+	}
+}
+
+func TestConfigShouldRejectExcessiveTagValueLength(t *testing.T) {
+	valueTooLong := make([]byte, 257)
+	for i := range valueTooLong {
+		valueTooLong[i] = 'a'
+	}
+
+	tags := map[string]string{}
+	tags["tag01"] = string(valueTooLong)
+
+	config := map[string]interface{}{
+		"capture_name_prefix":    "ignore",
+		"capture_container_name": "ignore",
+		"image_offer":            "ignore",
+		"image_publisher":        "ignore",
+		"image_sku":              "ignore",
+		"location":               "ignore",
+		"storage_account":        "ignore",
+		"resource_group_name":    "ignore",
+		"subscription_id":        "ignore",
+		"communicator":           "none",
+		// Does not matter for this test case, just pick one.
+		"os_type":    constants.Target_Linux,
+		"azure_tags": tags,
+	}
+
+	_, _, err := newConfig(config, getPackerConfiguration())
+	if err == nil {
+		t.Fatal("expected config to reject tag value based on length (> 256)")
+	}
+}

func getArmBuilderConfiguration() map[string]string {
	m := make(map[string]string)
	for _, v := range requiredConfigValues {

@@ -14,7 +14,7 @@ import (

type StepCreateResourceGroup struct {
	client *AzureClient
-	create func(resourceGroupName string, location string) error
+	create func(resourceGroupName string, location string, tags *map[string]*string) error
	say    func(message string)
	error  func(e error)
}

@@ -30,9 +30,10 @@ func NewStepCreateResourceGroup(client *AzureClient, ui packer.Ui) *StepCreateRe
	return step
}

-func (s *StepCreateResourceGroup) createResourceGroup(resourceGroupName string, location string) error {
+func (s *StepCreateResourceGroup) createResourceGroup(resourceGroupName string, location string, tags *map[string]*string) error {
	_, err := s.client.GroupsClient.CreateOrUpdate(resourceGroupName, resources.ResourceGroup{
		Location: &location,
+		Tags:     tags,
	})

	return err

@@ -43,11 +44,16 @@ func (s *StepCreateResourceGroup) Run(state multistep.StateBag) multistep.StepAc

	var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string)
	var location = state.Get(constants.ArmLocation).(string)
+	var tags = state.Get(constants.ArmTags).(*map[string]*string)

	s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName))
	s.say(fmt.Sprintf(" -> Location          : '%s'", location))
+	s.say(" -> Tags              :")
+	for k, v := range *tags {
+		s.say(fmt.Sprintf(" ->> %s : %s", k, *v))
+	}

-	err := s.create(resourceGroupName, location)
+	err := s.create(resourceGroupName, location, tags)
	if err == nil {
		state.Put(constants.ArmIsResourceGroupCreated, true)
	}

@@ -13,7 +13,7 @@ import (

func TestStepCreateResourceGroupShouldFailIfCreateFails(t *testing.T) {
	var testSubject = &StepCreateResourceGroup{
-		create: func(string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") },
+		create: func(string, string, *map[string]*string) error { return fmt.Errorf("!! Unit Test FAIL !!") },
		say:    func(message string) {},
		error:  func(e error) {},
	}

@@ -32,7 +32,7 @@ func TestStepCreateResourceGroupShouldFailIfCreateFails(t *testing.T) {

func TestStepCreateResourceGroupShouldPassIfCreatePasses(t *testing.T) {
	var testSubject = &StepCreateResourceGroup{
-		create: func(string, string) error { return nil },
+		create: func(string, string, *map[string]*string) error { return nil },
		say:    func(message string) {},
		error:  func(e error) {},
	}

@@ -52,11 +52,13 @@ func TestStepCreateResourceGroupShouldPassIfCreatePasses(t *testing.T) {
func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T) {
	var actualResourceGroupName string
	var actualLocation string
+	var actualTags *map[string]*string

	var testSubject = &StepCreateResourceGroup{
-		create: func(resourceGroupName string, location string) error {
+		create: func(resourceGroupName string, location string, tags *map[string]*string) error {
			actualResourceGroupName = resourceGroupName
			actualLocation = location
+			actualTags = tags
			return nil
		},
		say: func(message string) {},

@@ -70,8 +72,9 @@ func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T
		t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
	}

-	var expectedLocation = stateBag.Get(constants.ArmLocation).(string)
	var expectedResourceGroupName = stateBag.Get(constants.ArmResourceGroupName).(string)
+	var expectedLocation = stateBag.Get(constants.ArmLocation).(string)
+	var expectedTags = stateBag.Get(constants.ArmTags).(*map[string]*string)

	if actualResourceGroupName != expectedResourceGroupName {
		t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.")

@@ -81,6 +84,10 @@ func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T
		t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.")
	}

+	if len(*expectedTags) != len(*actualTags) || *(*expectedTags)["tag01"] != *(*actualTags)["tag01"] {
+		t.Fatal("Expected the step to source 'constants.ArmTags' from the state bag, but it did not.")
+	}

	_, ok := stateBag.GetOk(constants.ArmIsResourceGroupCreated)
	if !ok {
		t.Fatal("Expected the step to add item to stateBag['constants.ArmIsResourceGroupCreated'], but it did not.")

@@ -93,5 +100,12 @@ func createTestStateBagStepCreateResourceGroup() multistep.StateBag {
	stateBag.Put(constants.ArmLocation, "Unit Test: Location")
	stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName")

+	value := "Unit Test: Tags"
+	tags := map[string]*string{
+		"tag01": &value,
+	}
+
+	stateBag.Put(constants.ArmTags, &tags)

	return stateBag
}

(file deleted)

@@ -1,78 +0,0 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See the LICENSE file in builder/azure for license information.

package arm

// See https://github.com/Azure/azure-quickstart-templates for an extensive list of templates.

// Template to deploy a KeyVault.
//
// This template is still hard-coded unlike the ARM templates used for VMs for
// a couple of reasons.
//
// 1. The SDK defines no types for a Key Vault
// 2. The Key Vault template is relatively simple, and is static.
//
const KeyVault = `{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVaultName": {
      "type": "string"
    },
    "keyVaultSecretValue": {
      "type": "securestring"
    },
    "objectId": {
      "type": "string"
    },
    "tenantId": {
      "type": "string"
    }
  },
  "variables": {
    "apiVersion": "2015-06-01",
    "location": "[resourceGroup().location]",
    "keyVaultSecretName": "packerKeyVaultSecret"
  },
  "resources": [
    {
      "apiVersion": "[variables('apiVersion')]",
      "type": "Microsoft.KeyVault/vaults",
      "name": "[parameters('keyVaultName')]",
      "location": "[variables('location')]",
      "properties": {
        "enabledForDeployment": "true",
        "enabledForTemplateDeployment": "true",
        "tenantId": "[parameters('tenantId')]",
        "accessPolicies": [
          {
            "tenantId": "[parameters('tenantId')]",
            "objectId": "[parameters('objectId')]",
            "permissions": {
              "keys": [ "all" ],
              "secrets": [ "all" ]
            }
          }
        ],
        "sku": {
          "name": "standard",
          "family": "A"
        }
      },
      "resources": [
        {
          "apiVersion": "[variables('apiVersion')]",
          "type": "secrets",
          "name": "[variables('keyVaultSecretName')]",
          "dependsOn": [
            "[concat('Microsoft.KeyVault/vaults/', parameters('keyVaultName'))]"
          ],
          "properties": {
            "value": "[parameters('keyVaultSecretValue')]"
          }
        }
      ]
    }
  ]
}`
@@ -20,7 +20,11 @@ func GetKeyVaultDeployment(config *Config) (*resources.Deployment, error) {
		TenantId: &template.TemplateParameter{Value: config.TenantID},
	}

-	return createDeploymentParameters(KeyVault, params)
+	builder, _ := template.NewTemplateBuilder(template.KeyVault)
+	builder.SetTags(&config.AzureTags)
+
+	doc, _ := builder.ToJSON()
+	return createDeploymentParameters(*doc, params)
}

func GetVirtualMachineDeployment(config *Config) (*resources.Deployment, error) {

@@ -34,7 +38,7 @@ func GetVirtualMachineDeployment(config *Config) (*resources.Deployment, error)
		VMName: &template.TemplateParameter{Value: config.tmpComputeName},
	}

-	builder, _ := template.NewTemplateBuilder()
+	builder, _ := template.NewTemplateBuilder(template.BasicTemplate)
	osType := compute.Linux

	switch config.OSType {

@@ -58,6 +62,7 @@ func GetVirtualMachineDeployment(config *Config) (*resources.Deployment, error)
			config.VirtualNetworkSubnetName)
	}

+	builder.SetTags(&config.AzureTags)
	doc, _ := builder.ToJSON()
	return createDeploymentParameters(*doc, params)
}
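The change above replaces the hard-coded `KeyVault` template constant with a `TemplateBuilder` that is seeded with a base document, has the user's tags applied via `SetTags`, and is serialized with `ToJSON`. A loose sketch of that shape (this `templateBuilder` is hypothetical and much simpler than the real `template.TemplateBuilder`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// templateBuilder sketches what SetTags does conceptually: walk every
// resource in an ARM template document and attach the same tag map,
// then serialize the whole document.
type templateBuilder struct {
	doc map[string]interface{}
}

func (b *templateBuilder) SetTags(tags map[string]string) {
	resources, _ := b.doc["resources"].([]interface{})
	for _, r := range resources {
		if m, ok := r.(map[string]interface{}); ok {
			m["tags"] = tags
		}
	}
}

func (b *templateBuilder) ToJSON() (string, error) {
	buf, err := json.MarshalIndent(b.doc, "", "  ")
	return string(buf), err
}

func main() {
	b := &templateBuilder{doc: map[string]interface{}{
		"resources": []interface{}{
			map[string]interface{}{"type": "Microsoft.KeyVault/vaults"},
		},
	}}
	b.SetTags(map[string]string{"tag01": "value01"})
	doc, _ := b.ToJSON()
	fmt.Println(doc)
}
```

Applying tags at the builder level is what lets the approval-test JSON below show the same `"tags"` object on every generated resource.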

@@ -56,6 +56,11 @@
				"type": "secrets"
			}
		],
+		"tags": {
+			"tag01": "value01",
+			"tag02": "value02",
+			"tag03": "value03"
+		},
		"type": "Microsoft.KeyVault/vaults"
	}
],
@@ -0,0 +1,179 @@
{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "type": "string"
    },
    "adminUsername": {
      "type": "string"
    },
    "dnsNameForPublicIP": {
      "type": "string"
    },
    "osDiskName": {
      "type": "string"
    },
    "storageAccountBlobEndpoint": {
      "type": "string"
    },
    "vmName": {
      "type": "string"
    },
    "vmSize": {
      "type": "string"
    }
  },
  "resources": [
    {
      "apiVersion": "[variables('apiVersion')]",
      "location": "[variables('location')]",
      "name": "[variables('publicIPAddressName')]",
      "properties": {
        "dnsSettings": {
          "domainNameLabel": "[parameters('dnsNameForPublicIP')]"
        },
        "publicIPAllocationMethod": "[variables('publicIPAddressType')]"
      },
      "tags": {
        "tag01": "value01",
        "tag02": "value02",
        "tag03": "value03"
      },
      "type": "Microsoft.Network/publicIPAddresses"
    },
    {
      "apiVersion": "[variables('apiVersion')]",
      "location": "[variables('location')]",
      "name": "[variables('virtualNetworkName')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[variables('subnetName')]",
            "properties": {
              "addressPrefix": "[variables('subnetAddressPrefix')]"
            }
          }
        ]
      },
      "tags": {
        "tag01": "value01",
        "tag02": "value02",
        "tag03": "value03"
      },
      "type": "Microsoft.Network/virtualNetworks"
    },
    {
      "apiVersion": "[variables('apiVersion')]",
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]",
        "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
      ],
      "location": "[variables('location')]",
      "name": "[variables('nicName')]",
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]"
              },
              "subnet": {
                "id": "[variables('subnetRef')]"
              }
            }
          }
        ]
      },
      "tags": {
        "tag01": "value01",
        "tag02": "value02",
        "tag03": "value03"
      },
      "type": "Microsoft.Network/networkInterfaces"
    },
    {
      "apiVersion": "[variables('apiVersion')]",
      "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
      ],
      "location": "[variables('location')]",
      "name": "[parameters('vmName')]",
      "properties": {
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": false
          }
        },
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
            }
          ]
        },
        "osProfile": {
          "adminPassword": "[parameters('adminPassword')]",
          "adminUsername": "[parameters('adminUsername')]",
          "computerName": "[parameters('vmName')]",
          "linuxConfiguration": {
            "ssh": {
              "publicKeys": [
                {
                  "keyData": "",
                  "path": "[variables('sshKeyPath')]"
                }
              ]
            }
          }
        },
        "storageProfile": {
          "osDisk": {
            "caching": "ReadWrite",
            "createOption": "FromImage",
            "image": {
              "uri": "https://localhost/custom.vhd"
            },
            "name": "osdisk",
            "osType": "Linux",
            "vhd": {
              "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]"
            }
          }
        }
      },
      "tags": {
        "tag01": "value01",
        "tag02": "value02",
        "tag03": "value03"
      },
      "type": "Microsoft.Compute/virtualMachines"
    }
  ],
  "variables": {
    "addressPrefix": "10.0.0.0/16",
    "apiVersion": "2015-06-15",
    "location": "[resourceGroup().location]",
    "nicName": "packerNic",
    "publicIPAddressName": "packerPublicIP",
    "publicIPAddressType": "Dynamic",
    "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
    "subnetAddressPrefix": "10.0.0.0/24",
    "subnetName": "packerSubnet",
    "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
    "virtualNetworkName": "packerNetwork",
    "virtualNetworkResourceGroup": "[resourceGroup().name]",
    "vmStorageAccountContainerName": "images",
    "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
  }
}
@@ -23,19 +23,19 @@ func TestVirtualMachineDeployment00(t *testing.T) {
	}

	if deployment.Properties.ParametersLink != nil {
		t.Errorf("Expected the ParametersLink to be nil!")
		t.Error("Expected the ParametersLink to be nil!")
	}

	if deployment.Properties.TemplateLink != nil {
		t.Errorf("Expected the TemplateLink to be nil!")
		t.Error("Expected the TemplateLink to be nil!")
	}

	if deployment.Properties.Parameters == nil {
		t.Errorf("Expected the Parameters to not be nil!")
		t.Error("Expected the Parameters to not be nil!")
	}

	if deployment.Properties.Template == nil {
		t.Errorf("Expected the Template to not be nil!")
		t.Error("Expected the Template to not be nil!")
	}
}

@@ -177,6 +177,41 @@ func TestVirtualMachineDeployment05(t *testing.T) {
	}
}

// Verify that tags are properly applied to every resource
func TestVirtualMachineDeployment06(t *testing.T) {
	config := map[string]interface{}{
		"capture_name_prefix":    "ignore",
		"capture_container_name": "ignore",
		"location":               "ignore",
		"image_url":              "https://localhost/custom.vhd",
		"resource_group_name":    "ignore",
		"storage_account":        "ignore",
		"subscription_id":        "ignore",
		"os_type":                constants.Target_Linux,
		"communicator":           "none",
		"azure_tags": map[string]string{
			"tag01": "value01",
			"tag02": "value02",
			"tag03": "value03",
		},
	}

	c, _, err := newConfig(config, getPackerConfiguration())
	if err != nil {
		t.Fatal(err)
	}

	deployment, err := GetVirtualMachineDeployment(c)
	if err != nil {
		t.Fatal(err)
	}

	err = approvaltests.VerifyJSONStruct(t, deployment.Properties.Template)
	if err != nil {
		t.Fatal(err)
	}
}

// Ensure the link values are not set, and the concrete values are set.
func TestKeyVaultDeployment00(t *testing.T) {
	c, _, _ := newConfig(getArmBuilderConfiguration(), getPackerConfiguration())

@@ -190,19 +225,19 @@ func TestKeyVaultDeployment00(t *testing.T) {
	}

	if deployment.Properties.ParametersLink != nil {
		t.Errorf("Expected the ParametersLink to be nil!")
		t.Error("Expected the ParametersLink to be nil!")
	}

	if deployment.Properties.TemplateLink != nil {
		t.Errorf("Expected the TemplateLink to be nil!")
		t.Error("Expected the TemplateLink to be nil!")
	}

	if deployment.Properties.Parameters == nil {
		t.Errorf("Expected the Parameters to not be nil!")
		t.Error("Expected the Parameters to not be nil!")
	}

	if deployment.Properties.Template == nil {
		t.Errorf("Expected the Template to not be nil!")
		t.Error("Expected the Template to not be nil!")
	}
}

@@ -254,9 +289,17 @@ func TestKeyVaultDeployment02(t *testing.T) {
	}
}

// Ensure the KeyVault template is correct.
// Ensure the KeyVault template is correct when tags are supplied.
func TestKeyVaultDeployment03(t *testing.T) {
	c, _, _ := newConfig(getArmBuilderConfigurationWithWindows(), getPackerConfiguration())
	tags := map[string]interface{}{
		"azure_tags": map[string]string{
			"tag01": "value01",
			"tag02": "value02",
			"tag03": "value03",
		},
	}

	c, _, _ := newConfig(tags, getArmBuilderConfigurationWithWindows(), getPackerConfiguration())
	deployment, err := GetKeyVaultDeployment(c)
	if err != nil {
		t.Fatal(err)
@@ -26,5 +26,6 @@ const (
	ArmResourceGroupName               string = "arm.ResourceGroupName"
	ArmIsResourceGroupCreated          string = "arm.IsResourceGroupCreated"
	ArmStorageAccountName              string = "arm.StorageAccountName"
	ArmTags                            string = "arm.Tags"
	ArmVirtualMachineCaptureParameters string = "arm.VirtualMachineCaptureParameters"
)
@@ -3,7 +3,6 @@ package template

import (
	"github.com/Azure/azure-sdk-for-go/arm/compute"
	"github.com/Azure/azure-sdk-for-go/arm/network"
	//"github.com/Azure/azure-sdk-for-go/arm/resources/resources"
)

/////////////////////////////////////////////////

@@ -26,25 +25,49 @@ type Parameters struct {
/////////////////////////////////////////////////
// Template > Resource
type Resource struct {
	ApiVersion *string     `json:"apiVersion"`
	Name       *string     `json:"name"`
	Type       *string     `json:"type"`
	Location   *string     `json:"location"`
	DependsOn  *[]string   `json:"dependsOn,omitempty"`
	Properties *Properties `json:"properties,omitempty"`
	ApiVersion *string             `json:"apiVersion"`
	Name       *string             `json:"name"`
	Type       *string             `json:"type"`
	Location   *string             `json:"location,omitempty"`
	DependsOn  *[]string           `json:"dependsOn,omitempty"`
	Properties *Properties         `json:"properties,omitempty"`
	Tags       *map[string]*string `json:"tags,omitempty"`
	Resources  *[]Resource         `json:"resources,omitempty"`
}

/////////////////////////////////////////////////
// Template > Resource > Properties
type Properties struct {
	AddressSpace            *network.AddressSpace               `json:"addressSpace,omitempty"`
	DiagnosticsProfile      *compute.DiagnosticsProfile         `json:"diagnosticsProfile,omitempty"`
	DNSSettings             *network.PublicIPAddressDNSSettings `json:"dnsSettings,omitempty"`
	HardwareProfile         *compute.HardwareProfile            `json:"hardwareProfile,omitempty"`
	IPConfigurations        *[]network.IPConfiguration          `json:"ipConfigurations,omitempty"`
	NetworkProfile          *compute.NetworkProfile             `json:"networkProfile,omitempty"`
	OsProfile               *compute.OSProfile                  `json:"osProfile,omitempty"`
	PublicIPAllocatedMethod *network.IPAllocationMethod         `json:"publicIPAllocationMethod,omitempty"`
	StorageProfile          *compute.StorageProfile             `json:"storageProfile,omitempty"`
	Subnets                 *[]network.Subnet                   `json:"subnets,omitempty"`
	AccessPolicies               *[]AccessPolicies                   `json:"accessPolicies,omitempty"`
	AddressSpace                 *network.AddressSpace               `json:"addressSpace,omitempty"`
	DiagnosticsProfile           *compute.DiagnosticsProfile         `json:"diagnosticsProfile,omitempty"`
	DNSSettings                  *network.PublicIPAddressDNSSettings `json:"dnsSettings,omitempty"`
	EnabledForDeployment         *string                             `json:"enabledForDeployment,omitempty"`
	EnabledForTemplateDeployment *string                             `json:"enabledForTemplateDeployment,omitempty"`
	HardwareProfile              *compute.HardwareProfile            `json:"hardwareProfile,omitempty"`
	IPConfigurations             *[]network.IPConfiguration          `json:"ipConfigurations,omitempty"`
	NetworkProfile               *compute.NetworkProfile             `json:"networkProfile,omitempty"`
	OsProfile                    *compute.OSProfile                  `json:"osProfile,omitempty"`
	PublicIPAllocatedMethod      *network.IPAllocationMethod         `json:"publicIPAllocationMethod,omitempty"`
	Sku                          *Sku                                `json:"sku,omitempty"`
	StorageProfile               *compute.StorageProfile             `json:"storageProfile,omitempty"`
	Subnets                      *[]network.Subnet                   `json:"subnets,omitempty"`
	TenantId                     *string                             `json:"tenantId,omitempty"`
	Value                        *string                             `json:"value,omitempty"`
}

type AccessPolicies struct {
	ObjectId    *string      `json:"objectId,omitempty"`
	TenantId    *string      `json:"tenantId,omitempty"`
	Permissions *Permissions `json:"permissions,omitempty"`
}

type Permissions struct {
	Keys    *[]string `json:"keys,omitempty"`
	Secrets *[]string `json:"secrets,omitempty"`
}

type Sku struct {
	Family *string `json:"family,omitempty"`
	Name   *string `json:"name,omitempty"`
}
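The new `Tags` field carries `omitempty`, so untagged resources serialize exactly as before. A minimal sketch of that behavior, using a hypothetical trimmed stand-in for the `Resource` struct above (only the fields relevant here):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed, hypothetical stand-in for the Resource struct: pointer fields
// tagged `omitempty` disappear from the serialized template when nil,
// which is why "tags" only shows up in the JSON once SetTags has run.
type resource struct {
	Name *string             `json:"name"`
	Tags *map[string]*string `json:"tags,omitempty"`
}

// marshalResource serializes a resource the way the builder's ToJSON would.
func marshalResource(r resource) string {
	bs, _ := json.Marshal(r)
	return string(bs)
}

func main() {
	name := "packerNic"
	// No tags set: the "tags" key is absent entirely.
	fmt.Println(marshalResource(resource{Name: &name})) // {"name":"packerNic"}

	v := "value01"
	tags := map[string]*string{"tag01": &v}
	fmt.Println(marshalResource(resource{Name: &name, Tags: &tags}))
}
```

Note that `omitempty` on a pointer only suppresses the field when the pointer is nil; a pointer to an empty map would still be emitted, which is one reason `SetTags` returns early on empty input.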
@@ -26,10 +26,10 @@ type TemplateBuilder struct {
	template *Template
}

func NewTemplateBuilder() (*TemplateBuilder, error) {
func NewTemplateBuilder(template string) (*TemplateBuilder, error) {
	var t Template

	err := json.Unmarshal([]byte(basicTemplate), &t)
	err := json.Unmarshal([]byte(template), &t)
	if err != nil {
		return nil, err
	}

@@ -150,6 +150,17 @@ func (s *TemplateBuilder) SetVirtualNetwork(virtualNetworkResourceGroup, virtual
	return nil
}

func (s *TemplateBuilder) SetTags(tags *map[string]*string) error {
	if tags == nil || len(*tags) == 0 {
		return nil
	}

	for i := range *s.template.Resources {
		(*s.template.Resources)[i].Tags = tags
	}
	return nil
}

func (s *TemplateBuilder) ToJSON() (*string, error) {
	bs, err := json.MarshalIndent(s.template, jsonPrefix, jsonIndent)
@@ -210,7 +221,81 @@ func (s *TemplateBuilder) deleteResourceDependency(resource *Resource, predicate
	*resource.DependsOn = deps
}

const basicTemplate = `{
// See https://github.com/Azure/azure-quickstart-templates for an extensive list of templates.

// Template to deploy a KeyVault.
//
// This template is still hard-coded unlike the ARM templates used for VMs for
// a couple of reasons.
//
//  1. The SDK defines no types for a Key Vault.
//  2. The Key Vault template is relatively simple, and is static.
//
const KeyVault = `{
    "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "keyVaultName": {
        "type": "string"
      },
      "keyVaultSecretValue": {
        "type": "securestring"
      },
      "objectId": {
        "type": "string"
      },
      "tenantId": {
        "type": "string"
      }
    },
    "variables": {
      "apiVersion": "2015-06-01",
      "location": "[resourceGroup().location]",
      "keyVaultSecretName": "packerKeyVaultSecret"
    },
    "resources": [
      {
        "apiVersion": "[variables('apiVersion')]",
        "type": "Microsoft.KeyVault/vaults",
        "name": "[parameters('keyVaultName')]",
        "location": "[variables('location')]",
        "properties": {
          "enabledForDeployment": "true",
          "enabledForTemplateDeployment": "true",
          "tenantId": "[parameters('tenantId')]",
          "accessPolicies": [
            {
              "tenantId": "[parameters('tenantId')]",
              "objectId": "[parameters('objectId')]",
              "permissions": {
                "keys": [ "all" ],
                "secrets": [ "all" ]
              }
            }
          ],
          "sku": {
            "name": "standard",
            "family": "A"
          }
        },
        "resources": [
          {
            "apiVersion": "[variables('apiVersion')]",
            "type": "secrets",
            "name": "[variables('keyVaultSecretName')]",
            "dependsOn": [
              "[concat('Microsoft.KeyVault/vaults/', parameters('keyVaultName'))]"
            ],
            "properties": {
              "value": "[parameters('keyVaultSecretValue')]"
            }
          }
        ]
      }
    ]
}`

const BasicTemplate = `{
    "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
    "contentVersion": "1.0.0.0",
    "parameters": {
@@ -10,7 +10,7 @@ import (
// Ensure that a Linux template is configured as expected.
// * Include SSH configuration: authorized key, and key path.
func TestBuildLinux00(t *testing.T) {
	testSubject, err := NewTemplateBuilder()
	testSubject, err := NewTemplateBuilder(BasicTemplate)
	if err != nil {
		t.Fatal(err)
	}

@@ -38,7 +38,7 @@ func TestBuildLinux00(t *testing.T) {

// Ensure that a user can specify a custom VHD when building a Linux template.
func TestBuildLinux01(t *testing.T) {
	testSubject, err := NewTemplateBuilder()
	testSubject, err := NewTemplateBuilder(BasicTemplate)
	if err != nil {
		t.Fatal(err)
	}

@@ -66,7 +66,7 @@ func TestBuildLinux01(t *testing.T) {

// Ensure that a user can specify an existing Virtual Network
func TestBuildLinux02(t *testing.T) {
	testSubject, err := NewTemplateBuilder()
	testSubject, err := NewTemplateBuilder(BasicTemplate)
	if err != nil {
		t.Fatal(err)
	}

@@ -94,7 +94,7 @@ func TestBuildLinux02(t *testing.T) {
// * Include WinRM configuration.
// * Include KeyVault configuration, which is needed for WinRM.
func TestBuildWindows00(t *testing.T) {
	testSubject, err := NewTemplateBuilder()
	testSubject, err := NewTemplateBuilder(BasicTemplate)
	if err != nil {
		t.Fatal(err)
	}
@@ -3,7 +3,6 @@ package digitalocean

import (
	"fmt"
	"log"
	"time"

	"github.com/digitalocean/godo"
	"github.com/mitchellh/multistep"

@@ -50,7 +49,7 @@ func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction {
	}

	// Wait for the droplet to become unlocked for future steps
	if err := waitForDropletUnlocked(client, dropletId, 4*time.Minute); err != nil {
	if err := waitForDropletUnlocked(client, dropletId, c.StateTimeout); err != nil {
		// If we get an error the first time, actually report it
		err := fmt.Errorf("Error powering off droplet: %s", err)
		state.Put("error", err)

@@ -14,6 +14,7 @@ type stepShutdown struct{}

func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
	client := state.Get("client").(*godo.Client)
	c := state.Get("config").(Config)
	ui := state.Get("ui").(packer.Ui)
	dropletId := state.Get("droplet_id").(int)

@@ -63,7 +64,7 @@ func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
		}
	}()

	err = waitForDropletState("off", dropletId, client, 2*time.Minute)
	err = waitForDropletState("off", dropletId, client, c.StateTimeout)
	if err != nil {
		// If we get an error the first time, actually report it
		err := fmt.Errorf("Error shutting down droplet: %s", err)

@@ -72,7 +73,7 @@ func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction {
		return multistep.ActionHalt
	}

	if err := waitForDropletUnlocked(client, dropletId, 4*time.Minute); err != nil {
	if err := waitForDropletUnlocked(client, dropletId, c.StateTimeout); err != nil {
		// If we get an error the first time, actually report it
		err := fmt.Errorf("Error shutting down droplet: %s", err)
		state.Put("error", err)
@@ -9,7 +9,7 @@ import (
)

// accountFile represents the structure of the account file JSON file.
type accountFile struct {
type AccountFile struct {
	PrivateKeyId string `json:"private_key_id"`
	PrivateKey   string `json:"private_key"`
	ClientEmail  string `json:"client_email"`

@@ -22,7 +22,7 @@ func parseJSON(result interface{}, text string) error {
	return dec.Decode(result)
}

func processAccountFile(account_file *accountFile, text string) error {
func ProcessAccountFile(account_file *AccountFile, text string) error {
	// Assume text is a JSON string
	if err := parseJSON(account_file, text); err != nil {
		// If text was not JSON, assume it is a file path instead
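`ProcessAccountFile` accepts either inline JSON or a path to a JSON file, trying JSON first. A self-contained sketch of that fallback (hypothetical names, trimmed struct):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Trimmed stand-in for the AccountFile struct above.
type accountFile struct {
	ClientEmail string `json:"client_email"`
}

// loadAccount mirrors the JSON-first, file-path-second fallback in
// ProcessAccountFile: parse the argument as a JSON document, and only if
// that fails treat it as a path to one.
func loadAccount(a *accountFile, text string) error {
	if err := json.Unmarshal([]byte(text), a); err == nil {
		return nil
	}
	bs, err := os.ReadFile(text)
	if err != nil {
		return err
	}
	return json.Unmarshal(bs, a)
}

func main() {
	var a accountFile
	if err := loadAccount(&a, `{"client_email": "packer@example.com"}`); err != nil {
		panic(err)
	}
	fmt.Println(a.ClientEmail) // packer@example.com
}
```

The ordering matters: a path can never be valid JSON, but inline JSON could look path-like, so parse-then-read is the safe direction.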
@@ -7,8 +7,9 @@ import (

// Artifact represents a GCE image as the result of a Packer build.
type Artifact struct {
	image  Image
	driver Driver
	image  Image
	driver Driver
	config *Config
}

// BuilderId returns the builder Id.

@@ -39,5 +40,17 @@ func (a *Artifact) String() string {
}

func (a *Artifact) State(name string) interface{} {
	switch name {
	case "ImageName":
		return a.image.Name
	case "ImageSizeGb":
		return a.image.SizeGb
	case "AccountFilePath":
		return a.config.AccountFile
	case "ProjectId":
		return a.config.ProjectId
	case "BuildZone":
		return a.config.Zone
	}
	return nil
}
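`State` exposes named build facts to post-processors through one switch, returning `nil` for unknown keys. A trimmed, hypothetical sketch of that lookup pattern:

```go
package main

import "fmt"

// Trimmed, hypothetical artifact mirroring the State lookup in the diff.
type artifact struct {
	imageName string
	projectId string
	zone      string
}

// State returns a named fact about the build, or nil for unknown names —
// the loose interface{} contract lets callers probe without type errors.
func (a *artifact) State(name string) interface{} {
	switch name {
	case "ImageName":
		return a.imageName
	case "ProjectId":
		return a.projectId
	case "BuildZone":
		return a.zone
	}
	return nil
}

func main() {
	a := &artifact{imageName: "packer-image", projectId: "my-project", zone: "us-central1-a"}
	fmt.Println(a.State("BuildZone"), a.State("Unknown")) // us-central1-a <nil>
}
```

Returning `nil` rather than an error keeps consumers that only care about some keys (e.g. a post-processor that wants `BuildZone`) from having to handle every builder's key set.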
@@ -36,7 +36,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
// representing a GCE machine image.
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
	driver, err := NewDriverGCE(
		ui, b.config.ProjectId, &b.config.account)
		ui, b.config.ProjectId, &b.config.Account)
	if err != nil {
		return nil, err
	}

@@ -95,6 +95,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
	artifact := &Artifact{
		image:  state.Get("image").(Image),
		driver: driver,
		config: b.config,
	}
	return artifact, nil
}
@@ -37,6 +37,7 @@ type Config struct {
	MachineType     string            `mapstructure:"machine_type"`
	Metadata        map[string]string `mapstructure:"metadata"`
	Network         string            `mapstructure:"network"`
	OmitExternalIP  bool              `mapstructure:"omit_external_ip"`
	Preemptible     bool              `mapstructure:"preemptible"`
	RawStateTimeout string            `mapstructure:"state_timeout"`
	Region          string            `mapstructure:"region"`

@@ -48,7 +49,7 @@ type Config struct {
	UseInternalIP bool   `mapstructure:"use_internal_ip"`
	Zone          string `mapstructure:"zone"`

	account         accountFile
	Account         AccountFile
	privateKeyBytes []byte
	stateTimeout    time.Duration
	ctx             interpolate.Context

@@ -156,19 +157,25 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {
		c.Region = region
	}

	stateTimeout, err := time.ParseDuration(c.RawStateTimeout)
	err = c.CalcTimeout()
	if err != nil {
		errs = packer.MultiErrorAppend(
			errs, fmt.Errorf("Failed parsing state_timeout: %s", err))
		errs = packer.MultiErrorAppend(errs, err)
	}
	c.stateTimeout = stateTimeout

	if c.AccountFile != "" {
		if err := processAccountFile(&c.account, c.AccountFile); err != nil {
		if err := ProcessAccountFile(&c.Account, c.AccountFile); err != nil {
			errs = packer.MultiErrorAppend(errs, err)
		}
	}

	if c.OmitExternalIP && c.Address != "" {
		errs = packer.MultiErrorAppend(fmt.Errorf("you can not specify an external address when 'omit_external_ip' is true"))
	}

	if c.OmitExternalIP && !c.UseInternalIP {
		errs = packer.MultiErrorAppend(fmt.Errorf("'use_internal_ip' must be true if 'omit_external_ip' is true"))
	}

	// Check for any errors.
	if errs != nil && len(errs.Errors) > 0 {
		return nil, nil, errs

@@ -176,3 +183,12 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) {

	return c, nil, nil
}

func (c *Config) CalcTimeout() error {
	stateTimeout, err := time.ParseDuration(c.RawStateTimeout)
	if err != nil {
		return fmt.Errorf("Failed parsing state_timeout: %s", err)
	}
	c.stateTimeout = stateTimeout
	return nil
}
@@ -53,6 +53,7 @@ type InstanceConfig struct {
	Metadata            map[string]string
	Name                string
	Network             string
	OmitExternalIP      bool
	Preemptible         bool
	Region              string
	ServiceAccountEmail string
@@ -26,7 +26,7 @@ type driverGCE struct {

var DriverScopes = []string{"https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/devstorage.full_control"}

func NewDriverGCE(ui packer.Ui, p string, a *accountFile) (Driver, error) {
func NewDriverGCE(ui packer.Ui, p string, a *AccountFile) (Driver, error) {
	var err error

	var client *http.Client

@@ -50,15 +50,20 @@ func NewDriverGCE(ui packer.Ui, p string, a *AccountFile) (Driver, error) {
		// your service account.
		client = conf.Client(oauth2.NoContext)
	} else {
		log.Printf("[INFO] Requesting Google token via GCE Service Role...")
		client = &http.Client{
			Transport: &oauth2.Transport{
				// Fetch from Google Compute Engine's metadata server to retrieve
				// an access token for the provided account.
				// If no account is specified, "default" is used.
				Source: google.ComputeTokenSource(""),
			},
		}
		log.Printf("[INFO] Requesting Google token via GCE API Default Client Token Source...")
		client, err = google.DefaultClient(oauth2.NoContext, DriverScopes...)
		// The DefaultClient uses the DefaultTokenSource of the google lib.
		// The DefaultTokenSource uses the "Application Default Credentials".
		// It looks for credentials in the following places, preferring the first location found:
		// 1. A JSON file whose path is specified by the
		//    GOOGLE_APPLICATION_CREDENTIALS environment variable.
		// 2. A JSON file in a location known to the gcloud command-line tool.
		//    On Windows, this is %APPDATA%/gcloud/application_default_credentials.json.
		//    On other systems, $HOME/.config/gcloud/application_default_credentials.json.
		// 3. On Google App Engine it uses the appengine.AccessToken function.
		// 4. On Google Compute Engine and Google App Engine Managed VMs, it fetches
		//    credentials from the metadata server.
		//    (In this final case any provided scopes are ignored.)
	}

	if err != nil {

@@ -256,21 +261,24 @@ func (d *driverGCE) RunInstance(c *InstanceConfig) (<-chan error, error) {
		subnetworkSelfLink = subnetwork.SelfLink
	}

	// If given a regional ip, get it
	accessconfig := compute.AccessConfig{
		Name: "AccessConfig created by Packer",
		Type: "ONE_TO_ONE_NAT",
	}

	if c.Address != "" {
		d.ui.Message(fmt.Sprintf("Looking up address: %s", c.Address))
		region_url := strings.Split(zone.Region, "/")
		region := region_url[len(region_url)-1]
		address, err := d.service.Addresses.Get(d.projectId, region, c.Address).Do()
		if err != nil {
			return nil, err
	var accessconfig *compute.AccessConfig
	// Use external IP if OmitExternalIP isn't set
	if !c.OmitExternalIP {
		accessconfig = &compute.AccessConfig{
			Name: "AccessConfig created by Packer",
			Type: "ONE_TO_ONE_NAT",
		}

		// If given a static IP, use it
		if c.Address != "" {
			region_url := strings.Split(zone.Region, "/")
			region := region_url[len(region_url)-1]
			address, err := d.service.Addresses.Get(d.projectId, region, c.Address).Do()
			if err != nil {
				return nil, err
			}
			accessconfig.NatIP = address.Address
		}
		accessconfig.NatIP = address.Address
	}

	// Build up the metadata

@@ -307,11 +315,9 @@ func (d *driverGCE) RunInstance(c *InstanceConfig) (<-chan error, error) {
		Name: c.Name,
		NetworkInterfaces: []*compute.NetworkInterface{
			&compute.NetworkInterface{
				AccessConfigs: []*compute.AccessConfig{
					&accessconfig,
				},
				Network:    network.SelfLink,
				Subnetwork: subnetworkSelfLink,
				AccessConfigs: []*compute.AccessConfig{accessconfig},
				Network:       network.SelfLink,
				Subnetwork:    subnetworkSelfLink,
			},
		},
		Scheduling: &compute.Scheduling{
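The `omit_external_ip` refactor turns `accessconfig` into a pointer that stays nil when no external IP is wanted; the NAT config, and optionally a static address, are only built otherwise. A sketch of that decision with a hypothetical trimmed stand-in for `compute.AccessConfig` (the real code also resolves the static address through the GCE API):

```go
package main

import "fmt"

// Trimmed, hypothetical stand-in for compute.AccessConfig.
type accessConfig struct {
	Name  string
	Type  string
	NatIP string
}

// buildAccessConfigs mirrors the omit_external_ip logic in RunInstance:
// a nil entry means "no external IP"; otherwise a ONE_TO_ONE_NAT config
// is built, optionally pinned to a static NAT address.
func buildAccessConfigs(omitExternalIP bool, staticIP string) []*accessConfig {
	if omitExternalIP {
		return []*accessConfig{nil}
	}
	ac := &accessConfig{
		Name: "AccessConfig created by Packer",
		Type: "ONE_TO_ONE_NAT",
	}
	if staticIP != "" {
		ac.NatIP = staticIP
	}
	return []*accessConfig{ac}
}

func main() {
	fmt.Println(buildAccessConfigs(true, "")[0] == nil)            // true
	fmt.Println(buildAccessConfigs(false, "203.0.113.7")[0].NatIP) // 203.0.113.7
}
```

Keeping the slice shape identical in both cases is what allows the later `AccessConfigs: []*compute.AccessConfig{accessconfig}` line to stay unconditional.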
@@ -25,7 +25,7 @@ func (config *Config) getImage() Image {

func (config *Config) getInstanceMetadata(sshPublicKey string) (map[string]string, error) {
	instanceMetadata := make(map[string]string)
		var err error
	var err error

	// Copy metadata from config.
	for k, v := range config.Metadata {

@@ -77,9 +77,10 @@ func (s *StepCreateInstance) Run(state multistep.StateBag) multistep.StepAction
		Metadata:            metadata,
		Name:                name,
		Network:             config.Network,
		OmitExternalIP:      config.OmitExternalIP,
		Preemptible:         config.Preemptible,
		Region:              config.Region,
		ServiceAccountEmail: config.account.ClientEmail,
		ServiceAccountEmail: config.Account.ClientEmail,
		Subnetwork:          config.Subnetwork,
		Tags:                config.Tags,
		Zone:                config.Zone,
@@ -71,8 +71,9 @@ func (s *StepTeardownInstance) Cleanup(state multistep.StateBag) {
	if err != nil {
		ui.Error(fmt.Sprintf(
			"Error deleting disk. Please delete it manually.\n\n"+
				"Name: %s\n"+
				"Error: %s", config.InstanceName, err))
				"DiskName: %s\n"+
				"Zone: %s\n"+
				"Error: %s", config.DiskName, config.Zone, err))
	}

	ui.Message("Disk has been deleted!")
@@ -7,8 +7,10 @@ import (
	gossh "golang.org/x/crypto/ssh"
)

func CommHost(state multistep.StateBag) (string, error) {
	return "127.0.0.1", nil
func CommHost(host string) func(multistep.StateBag) (string, error) {
	return func(state multistep.StateBag) (string, error) {
		return host, nil
	}
}

func SSHPort(state multistep.StateBag) (int, error) {
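`CommHost` changes from a plain lookup function into a factory: it closes over the configured host and returns a function with the old lookup signature, so callers can parameterize the host without changing the `StepConnect` contract. A minimal sketch of that closure pattern (state-bag parameter elided, hypothetical lowercase name):

```go
package main

import "fmt"

// commHost is the shape of the new CommHost: instead of always answering
// "127.0.0.1", the factory captures the configured host and hands back a
// lookup function that the connect step can call later.
func commHost(host string) func() (string, error) {
	return func() (string, error) {
		return host, nil
	}
}

func main() {
	lookup := commHost("10.0.0.5")
	h, _ := lookup()
	fmt.Println(h) // 10.0.0.5
}
```

This is the standard Go trick for fitting configuration into a fixed callback signature: capture it in a closure at construction time.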
@@ -21,6 +21,10 @@ type SSHConfig struct {
}

func (c *SSHConfig) Prepare(ctx *interpolate.Context) []error {
	if c.Comm.SSHHost == "" {
		c.Comm.SSHHost = "127.0.0.1"
	}

	if c.SSHHostPortMin == 0 {
		c.SSHHostPortMin = 2222
	}
@@ -235,7 +235,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
		},
		&communicator.StepConnect{
			Config:    &b.config.SSHConfig.Comm,
			Host:      vboxcommon.CommHost,
			Host:      vboxcommon.CommHost(b.config.SSHConfig.Comm.SSHHost),
			SSHConfig: vboxcommon.SSHConfigFunc(b.config.SSHConfig),
			SSHPort:   vboxcommon.SSHPort,
		},

@@ -104,7 +104,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
		},
		&communicator.StepConnect{
			Config:    &b.config.SSHConfig.Comm,
			Host:      vboxcommon.CommHost,
			Host:      vboxcommon.CommHost(b.config.SSHConfig.Comm.SSHHost),
			SSHConfig: vboxcommon.SSHConfigFunc(b.config.SSHConfig),
			SSHPort:   vboxcommon.SSHPort,
		},
@@ -11,9 +11,10 @@ type RunConfig struct {
	Headless    bool   `mapstructure:"headless"`
	RawBootWait string `mapstructure:"boot_wait"`

	VNCBindAddress string `mapstructure:"vnc_bind_address"`
	VNCPortMin     uint   `mapstructure:"vnc_port_min"`
	VNCPortMax     uint   `mapstructure:"vnc_port_max"`
	VNCBindAddress     string `mapstructure:"vnc_bind_address"`
	VNCPortMin         uint   `mapstructure:"vnc_port_min"`
	VNCPortMax         uint   `mapstructure:"vnc_port_max"`
	VNCDisablePassword bool   `mapstructure:"vnc_disable_password"`

	BootWait time.Duration ``
}
@ -21,13 +21,17 @@ import (
// Produces:
//   vnc_port uint - The port that VNC is configured to listen on.
type StepConfigureVNC struct {
	VNCBindAddress string
	VNCPortMin     uint
	VNCPortMax     uint
	VNCBindAddress     string
	VNCPortMin         uint
	VNCPortMax         uint
	VNCDisablePassword bool
}

type VNCAddressFinder interface {
	VNCAddress(string, uint, uint) (string, uint, error)

	// UpdateVMX sets driver-specific VNC values in the VMX data.
	UpdateVMX(vncAddress, vncPassword string, vncPort uint, vmxData map[string]string)
}

func (StepConfigureVNC) VNCAddress(vncBindAddress string, portMin, portMax uint) (string, uint, error) {

@ -53,6 +57,24 @@ func (StepConfigureVNC) VNCAddress(vncBindAddress string, portMin, portMax uint)
	return vncBindAddress, vncPort, nil
}

func VNCPassword(skipPassword bool) string {
	if skipPassword {
		return ""
	}
	length := int(8)

	charSet := []byte("1234567890-=qwertyuiop[]asdfghjkl;zxcvbnm,./!@#%^*()_+QWERTYUIOP{}|ASDFGHJKL:XCVBNM<>?")
	charSetLength := len(charSet)

	password := make([]byte, length)

	for i := 0; i < length; i++ {
		password[i] = charSet[rand.Intn(charSetLength)]
	}

	return string(password)
}

func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {
	driver := state.Get("driver").(Driver)
	ui := state.Get("ui").(packer.Ui)

@ -88,12 +110,12 @@ func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {
		return multistep.ActionHalt
	}

	vncPassword := VNCPassword(s.VNCDisablePassword)

	log.Printf("Found available VNC port: %d", vncPort)

	vmxData := ParseVMX(string(vmxBytes))
	vmxData["remotedisplay.vnc.enabled"] = "TRUE"
	vmxData["remotedisplay.vnc.port"] = fmt.Sprintf("%d", vncPort)
	vmxData["remotedisplay.vnc.ip"] = fmt.Sprintf("%s", vncBindAddress)
	vncFinder.UpdateVMX(vncBindAddress, vncPassword, vncPort, vmxData)

	if err := WriteVMX(vmxPath, vmxData); err != nil {
		err := fmt.Errorf("Error writing VMX data: %s", err)

@ -104,9 +126,19 @@ func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {

	state.Put("vnc_port", vncPort)
	state.Put("vnc_ip", vncBindAddress)
	state.Put("vnc_password", vncPassword)

	return multistep.ActionContinue
}

func (StepConfigureVNC) UpdateVMX(address, password string, port uint, data map[string]string) {
	data["remotedisplay.vnc.enabled"] = "TRUE"
	data["remotedisplay.vnc.port"] = fmt.Sprintf("%d", port)
	data["remotedisplay.vnc.ip"] = address
	if len(password) > 0 {
		data["remotedisplay.vnc.password"] = password
	}
}

func (StepConfigureVNC) Cleanup(multistep.StateBag) {
}

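The new `VNCPassword` helper above can be exercised on its own. A minimal standalone sketch (the function is renamed `vncPassword` here; as in the diff it uses `math/rand`, so the generated password is not cryptographically strong):

```go
package main

import (
	"fmt"
	"math/rand"
)

// vncPassword mirrors the VNCPassword helper from the diff: it returns an
// empty string when the password is disabled, otherwise an 8-character
// password drawn from a fixed character set.
func vncPassword(skipPassword bool) string {
	if skipPassword {
		return ""
	}
	const length = 8
	charSet := []byte("1234567890-=qwertyuiop[]asdfghjkl;zxcvbnm,./!@#%^*()_+QWERTYUIOP{}|ASDFGHJKL:XCVBNM<>?")
	password := make([]byte, length)
	for i := 0; i < length; i++ {
		password[i] = charSet[rand.Intn(len(charSet))]
	}
	return string(password)
}

func main() {
	fmt.Println(len(vncPassword(false))) // 8
	fmt.Printf("%q\n", vncPassword(true)) // ""
}
```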
@ -0,0 +1,46 @@
diff a/builder/vmware/common/step_configure_vnc.go b/builder/vmware/common/step_configure_vnc.go (rejected hunks)
@@ -52,6 +52,21 @@ func (StepConfigureVNC) VNCAddress(portMin, portMax uint) (string, uint, error)
 	return "127.0.0.1", vncPort, nil
 }
 
+func VNCPassword() (string) {
+	length := int(8)
+
+	charSet := []byte("1234567890-=qwertyuiop[]asdfghjkl;zxcvbnm,./!@#%^*()_+QWERTYUIOP{}|ASDFGHJKL:XCVBNM<>?")
+	charSetLength := len(charSet)
+
+	password := make([]byte, length)
+
+	for i := 0; i < length; i++ {
+		password[i] = charSet[ rand.Intn(charSetLength) ]
+	}
+
+	return string(password)
+}
+
 func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {
 	driver := state.Get("driver").(Driver)
 	ui := state.Get("ui").(packer.Ui)
@@ -86,12 +101,14 @@ func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {
 		ui.Error(err.Error())
 		return multistep.ActionHalt
 	}
+	vncPassword := VNCPassword()
 
 	log.Printf("Found available VNC port: %d", vncPort)
 
 	vmxData := ParseVMX(string(vmxBytes))
 	vmxData["remotedisplay.vnc.enabled"] = "TRUE"
 	vmxData["remotedisplay.vnc.port"] = fmt.Sprintf("%d", vncPort)
+	vmxData["remotedisplay.vnc.password"] = vncPassword
 
 	if err := WriteVMX(vmxPath, vmxData); err != nil {
 		err := fmt.Errorf("Error writing VMX data: %s", err)
@@ -102,6 +119,7 @@ func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {
 
 	state.Put("vnc_port", vncPort)
 	state.Put("vnc_ip", vncIp)
+	state.Put("vnc_password", vncPassword)
 
 	return multistep.ActionContinue
 }

@ -0,0 +1,25 @@
package common

import (
	"fmt"
	"testing"
)

func TestStepConfigureVNC_implVNCAddressFinder(t *testing.T) {
	var _ VNCAddressFinder = new(StepConfigureVNC)
}

func TestStepConfigureVNC_UpdateVMX(t *testing.T) {
	var s StepConfigureVNC
	data := make(map[string]string)
	s.UpdateVMX("0.0.0.0", "", 5900, data)
	if ip := data["remotedisplay.vnc.ip"]; ip != "0.0.0.0" {
		t.Errorf("bad VMX data for key remotedisplay.vnc.ip: %v", ip)
	}
	if enabled := data["remotedisplay.vnc.enabled"]; enabled != "TRUE" {
		t.Errorf("bad VMX data for key remotedisplay.vnc.enabled: %v", enabled)
	}
	if port := data["remotedisplay.vnc.port"]; port != fmt.Sprint(5900) {
		t.Errorf("bad VMX data for key remotedisplay.vnc.port: %v", port)
	}
}

@ -38,15 +38,17 @@ func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction {
	if s.Headless {
		vncIpRaw, vncIpOk := state.GetOk("vnc_ip")
		vncPortRaw, vncPortOk := state.GetOk("vnc_port")
		vncPasswordRaw, vncPasswordOk := state.GetOk("vnc_password")

		if vncIpOk && vncPortOk {
		if vncIpOk && vncPortOk && vncPasswordOk {
			vncIp := vncIpRaw.(string)
			vncPort := vncPortRaw.(uint)
			vncPassword := vncPasswordRaw.(string)

			ui.Message(fmt.Sprintf(
				"The VM will be run headless, without a GUI. If you want to\n"+
					"view the screen of the VM, connect via VNC without a password to\n"+
					"%s:%d", vncIp, vncPort))
					"view the screen of the VM, connect via VNC with the password \"%s\" to\n"+
					"%s:%d", vncPassword, vncIp, vncPort))
		} else {
			ui.Message("The VM will be run headless, without a GUI, as configured.\n" +
				"If the run isn't succeeding as you expect, please enable the GUI\n" +

@ -46,6 +46,7 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
	ui := state.Get("ui").(packer.Ui)
	vncIp := state.Get("vnc_ip").(string)
	vncPort := state.Get("vnc_port").(uint)
	vncPassword := state.Get("vnc_password")

	var pauseFn multistep.DebugPauseFn
	if debug {

@ -63,7 +64,15 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
	}
	defer nc.Close()

	c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: false})
	var auth []vnc.ClientAuth

	if vncPassword != nil && len(vncPassword.(string)) > 0 {
		auth = []vnc.ClientAuth{&vnc.PasswordAuth{Password: vncPassword.(string)}}
	} else {
		auth = []vnc.ClientAuth{new(vnc.ClientAuthNone)}
	}

	c, err := vnc.Client(nc, &vnc.ClientConfig{Auth: auth, Exclusive: true})
	if err != nil {
		err := fmt.Errorf("Error handshaking with VNC: %s", err)
		state.Put("error", err)

@ -0,0 +1,26 @@
diff a/builder/vmware/common/step_type_boot_command.go b/builder/vmware/common/step_type_boot_command.go (rejected hunks)
@@ -45,6 +45,7 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
 	ui := state.Get("ui").(packer.Ui)
 	vncIp := state.Get("vnc_ip").(string)
 	vncPort := state.Get("vnc_port").(uint)
+	vncPassword := state.Get("vnc_password")
 
 	// Connect to VNC
 	ui.Say("Connecting to VM via VNC")
@@ -57,7 +58,15 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction
 	}
 	defer nc.Close()
 
-	c, err := vnc.Client(nc, &vnc.ClientConfig{Exclusive: true})
+	var auth []vnc.ClientAuth
+
+	if vncPassword != nil {
+		auth = []vnc.ClientAuth{&vnc.PasswordAuth{Password: vncPassword.(string)}}
+	} else {
+		auth = []vnc.ClientAuth{}
+	}
+
+	c, err := vnc.Client(nc, &vnc.ClientConfig{Auth: auth, Exclusive: true})
 	if err != nil {
 		err := fmt.Errorf("Error handshaking with VNC: %s", err)
 		state.Put("error", err)

@ -17,7 +17,7 @@ import (
func ParseVMX(contents string) map[string]string {
	results := make(map[string]string)

	lineRe := regexp.MustCompile(`^(.+?)\s*=\s*"(.*?)"\s*$`)
	lineRe := regexp.MustCompile(`^(.+?)\s*=\s*"?(.*?)"?\s*$`)

	for _, line := range strings.Split(contents, "\n") {
		matches := lineRe.FindStringSubmatch(line)

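The regex change above makes the surrounding quotes optional, so `ParseVMX` now accepts both `key = "value"` and bare `key = value` lines. A standalone sketch (`parseLine` is a hypothetical helper for illustration; the diff does this matching inline in the line loop):

```go
package main

import (
	"fmt"
	"regexp"
)

// The relaxed pattern from the diff: the quotes around a VMX value are
// optional, so both quoted and unquoted lines match.
var lineRe = regexp.MustCompile(`^(.+?)\s*=\s*"?(.*?)"?\s*$`)

// parseLine returns the key and (unquoted) value of a single VMX line,
// or two empty strings when the line does not match.
func parseLine(line string) (key, value string) {
	m := lineRe.FindStringSubmatch(line)
	if m == nil {
		return "", ""
	}
	return m[1], m[2]
}

func main() {
	k, v := parseLine(`.encoding = "UTF-8"`)
	fmt.Println(k, v) // .encoding UTF-8

	k, v = parseLine(`scsi0:0.virtualSSD = 1`)
	fmt.Println(k, v) // scsi0:0.virtualSSD 1
}
```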
@ -43,9 +43,35 @@ func EncodeVMX(contents map[string]string) string {
		i++
	}

	// a list of VMX key fragments whose values must not be quoted
	// fragments are used to cover multiples (i.e. multiple disks)
	// keys are still lowercase at this point, use lower fragments
	noQuotes := []string{
		".virtualssd",
	}

	// a list of VMX key fragments that are case sensitive
	// fragments are used to cover multiples (i.e. multiple disks)
	caseSensitive := []string{
		".virtualSSD",
	}

	sort.Strings(keys)
	for _, k := range keys {
		buf.WriteString(fmt.Sprintf("%s = \"%s\"\n", k, contents[k]))
		pat := "%s = \"%s\"\n"
		// items with no quotes
		for _, q := range noQuotes {
			if strings.Contains(k, q) {
				pat = "%s = %s\n"
				break
			}
		}
		key := k
		// case sensitive key fragments
		for _, c := range caseSensitive {
			key = strings.Replace(key, strings.ToLower(c), c, 1)
		}
		buf.WriteString(fmt.Sprintf(pat, key, contents[k]))
	}

	return buf.String()

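The `EncodeVMX` changes above can be sketched as a self-contained function; `encodeVMX` here is a simplified stand-in that rebuilds its own sorted key list, not the real implementation, but it shows the two new behaviours: values for keys containing a `noQuotes` fragment are emitted unquoted, and `caseSensitive` fragments get their original casing restored:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
	"strings"
)

// encodeVMX emits VMX keys sorted, unquoting and re-casing the
// .virtualSSD entries as in the diff.
func encodeVMX(contents map[string]string) string {
	noQuotes := []string{".virtualssd"}
	caseSensitive := []string{".virtualSSD"}

	keys := make([]string, 0, len(contents))
	for k := range contents {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var buf bytes.Buffer
	for _, k := range keys {
		pat := "%s = \"%s\"\n"
		for _, q := range noQuotes {
			if strings.Contains(k, q) {
				pat = "%s = %s\n" // unquoted value
				break
			}
		}
		key := k
		for _, c := range caseSensitive {
			key = strings.Replace(key, strings.ToLower(c), c, 1)
		}
		buf.WriteString(fmt.Sprintf(pat, key, contents[k]))
	}
	return buf.String()
}

func main() {
	fmt.Print(encodeVMX(map[string]string{
		"config.version":     "8",
		"scsi0:0.virtualssd": "1",
	}))
	// config.version = "8"
	// scsi0:0.virtualSSD = 1
}
```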
@ -6,10 +6,11 @@ func TestParseVMX(t *testing.T) {
	contents := `
.encoding = "UTF-8"
config.version = "8"
scsi0:0.virtualSSD = 1
`

	results := ParseVMX(contents)
	if len(results) != 2 {
	if len(results) != 3 {
		t.Fatalf("not correct number of results: %d", len(results))
	}

@ -20,16 +21,22 @@ config.version = "8"
	if results["config.version"] != "8" {
		t.Errorf("invalid config.version: %s", results["config.version"])
	}

	if results["scsi0:0.virtualssd"] != "1" {
		t.Errorf("invalid scsi0:0.virtualssd: %s", results["scsi0:0.virtualssd"])
	}
}

func TestEncodeVMX(t *testing.T) {
	contents := map[string]string{
		".encoding":          "UTF-8",
		"config.version":     "8",
		"scsi0:0.virtualssd": "1",
	}

	expected := `.encoding = "UTF-8"
config.version = "8"
scsi0:0.virtualSSD = 1
`

	result := EncodeVMX(contents)

@ -61,6 +61,8 @@ type Config struct {
	RemotePassword   string `mapstructure:"remote_password"`
	RemotePrivateKey string `mapstructure:"remote_private_key_file"`

	CommConfig communicator.Config `mapstructure:",squash"`

	ctx interpolate.Context
}

@ -254,9 +256,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
			HTTPPortMax: b.config.HTTPPortMax,
		},
		&vmwcommon.StepConfigureVNC{
			VNCBindAddress: b.config.VNCBindAddress,
			VNCPortMin:     b.config.VNCPortMin,
			VNCPortMax:     b.config.VNCPortMax,
			VNCBindAddress:     b.config.VNCBindAddress,
			VNCPortMin:         b.config.VNCPortMin,
			VNCPortMax:         b.config.VNCPortMax,
			VNCDisablePassword: b.config.VNCDisablePassword,
		},
		&StepRegister{
			Format: b.config.Format,

@ -338,3 +338,61 @@ func TestBuilderPrepare_VNCPort(t *testing.T) {
		t.Fatalf("should not have error: %s", err)
	}
}

func TestBuilderPrepare_CommConfig(t *testing.T) {
	// Test Winrm
	{
		config := testConfig()
		config["communicator"] = "winrm"
		config["winrm_username"] = "username"
		config["winrm_password"] = "password"
		config["winrm_host"] = "1.2.3.4"

		var b Builder
		warns, err := b.Prepare(config)
		if len(warns) > 0 {
			t.Fatalf("bad: %#v", warns)
		}
		if err != nil {
			t.Fatalf("should not have error: %s", err)
		}

		if b.config.CommConfig.WinRMUser != "username" {
			t.Errorf("bad winrm_username: %s", b.config.CommConfig.WinRMUser)
		}
		if b.config.CommConfig.WinRMPassword != "password" {
			t.Errorf("bad winrm_password: %s", b.config.CommConfig.WinRMPassword)
		}
		if host := b.config.CommConfig.Host(); host != "1.2.3.4" {
			t.Errorf("bad host: %s", host)
		}
	}

	// Test SSH
	{
		config := testConfig()
		config["communicator"] = "ssh"
		config["ssh_username"] = "username"
		config["ssh_password"] = "password"
		config["ssh_host"] = "1.2.3.4"

		var b Builder
		warns, err := b.Prepare(config)
		if len(warns) > 0 {
			t.Fatalf("bad: %#v", warns)
		}
		if err != nil {
			t.Fatalf("should not have error: %s", err)
		}

		if b.config.CommConfig.SSHUsername != "username" {
			t.Errorf("bad ssh_username: %s", b.config.CommConfig.SSHUsername)
		}
		if b.config.CommConfig.SSHPassword != "password" {
			t.Errorf("bad ssh_password: %s", b.config.CommConfig.SSHPassword)
		}
		if host := b.config.CommConfig.Host(); host != "1.2.3.4" {
			t.Errorf("bad host: %s", host)
		}
	}
}

@ -65,10 +65,8 @@ func (d *ESX5Driver) ReloadVM() error {

func (d *ESX5Driver) Start(vmxPathLocal string, headless bool) error {
	for i := 0; i < 20; i++ {
		err := d.sh("vim-cmd", "vmsvc/power.on", d.vmId)
		if err != nil {
			return err
		}
		// intentionally not checking for error since power-on may fail, especially after initial VM registration
		d.sh("vim-cmd", "vmsvc/power.on", d.vmId)
		time.Sleep((time.Duration(i) * time.Second) + 1)
		running, err := d.IsRunning(vmxPathLocal)
		if err != nil {

@ -176,7 +174,7 @@ func (d *ESX5Driver) HostIP() (string, error) {
	return host, err
}

func (d *ESX5Driver) VNCAddress(vncBindIP string, portMin, portMax uint) (string, uint, error) {
func (d *ESX5Driver) VNCAddress(_ string, portMin, portMax uint) (string, uint, error) {
	var vncPort uint

	// Process ports ESXi is listening on to determine which are available

@ -232,6 +230,16 @@ func (d *ESX5Driver) VNCAddress(vncBindIP string, portMin, portMax uint) (string
	return d.Host, vncPort, nil
}

// UpdateVMX adds the VNC port to the VMX data.
func (ESX5Driver) UpdateVMX(_, password string, port uint, data map[string]string) {
	// Do not set remotedisplay.vnc.ip - this breaks ESXi.
	data["remotedisplay.vnc.enabled"] = "TRUE"
	data["remotedisplay.vnc.port"] = fmt.Sprintf("%d", port)
	if len(password) > 0 {
		data["remotedisplay.vnc.password"] = password
	}
}

func (d *ESX5Driver) CommHost(state multistep.StateBag) (string, error) {
	config := state.Get("config").(*Config)

@ -239,6 +247,11 @@ func (d *ESX5Driver) CommHost(state multistep.StateBag) (string, error) {
		return address.(string), nil
	}

	if address := config.CommConfig.Host(); address != "" {
		state.Put("vm_address", address)
		return address, nil
	}

	r, err := d.esxcli("network", "vm", "list")
	if err != nil {
		return "", err

@ -258,18 +271,37 @@ func (d *ESX5Driver) CommHost(state multistep.StateBag) (string, error) {
		return "", err
	}

	record, err = r.read()
	if err != nil {
		return "", err
	}
	// Loop through interfaces
	for {
		record, err = r.read()
		if err == io.EOF {
			break
		}
		if err != nil {
			return "", err
		}

		if record["IPAddress"] == "0.0.0.0" {
			return "", errors.New("VM network port found, but no IP address")
		if record["IPAddress"] == "0.0.0.0" {
			continue
		}
		// When multiple NICs are connected to the same network, choose
		// one that has a route back. This Dial should ensure that.
		conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", record["IPAddress"], d.Port), 2*time.Second)
		if err != nil {
			if e, ok := err.(*net.OpError); ok {
				if e.Timeout() {
					log.Printf("Timeout connecting to %s", record["IPAddress"])
					continue
				}
			}
		} else {
			defer conn.Close()
			address := record["IPAddress"]
			state.Put("vm_address", address)
			return address, nil
		}
	}

	address := record["IPAddress"]
	state.Put("vm_address", address)
	return address, nil
	return "", errors.New("No interface on the VM has an IP address ready")
}

//-------------------------------------------------------------------

@ -2,19 +2,41 @@ package iso

import (
	"fmt"
	vmwcommon "github.com/mitchellh/packer/builder/vmware/common"
	"net"
	"testing"

	"github.com/mitchellh/multistep"
	vmwcommon "github.com/mitchellh/packer/builder/vmware/common"
)

func TestESX5Driver_implDriver(t *testing.T) {
	var _ vmwcommon.Driver = new(ESX5Driver)
}

func TestESX5Driver_UpdateVMX(t *testing.T) {
	var driver ESX5Driver
	data := make(map[string]string)
	driver.UpdateVMX("0.0.0.0", "", 5900, data)
	if _, ok := data["remotedisplay.vnc.ip"]; ok {
		// Do not add the remotedisplay.vnc.ip on ESXi
		t.Fatal("invalid VMX data key: remotedisplay.vnc.ip")
	}
	if enabled := data["remotedisplay.vnc.enabled"]; enabled != "TRUE" {
		t.Errorf("bad VMX data for key remotedisplay.vnc.enabled: %v", enabled)
	}
	if port := data["remotedisplay.vnc.port"]; port != fmt.Sprint(5900) {
		t.Errorf("bad VMX data for key remotedisplay.vnc.port: %v", port)
	}
}

func TestESX5Driver_implOutputDir(t *testing.T) {
	var _ vmwcommon.OutputDir = new(ESX5Driver)
}

func TestESX5Driver_implVNCAddressFinder(t *testing.T) {
	var _ vmwcommon.VNCAddressFinder = new(ESX5Driver)
}

func TestESX5Driver_implRemoteDriver(t *testing.T) {
	var _ RemoteDriver = new(ESX5Driver)
}

@ -33,3 +55,44 @@ func TestESX5Driver_HostIP(t *testing.T) {
		t.Error(fmt.Sprintf("Expected string, %s but got %s", expected_host, host))
	}
}

func TestESX5Driver_CommHost(t *testing.T) {
	const expected_host = "127.0.0.1"

	config := testConfig()
	config["communicator"] = "winrm"
	config["winrm_username"] = "username"
	config["winrm_password"] = "password"
	config["winrm_host"] = expected_host

	var b Builder
	warns, err := b.Prepare(config)
	if len(warns) > 0 {
		t.Fatalf("bad: %#v", warns)
	}
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}
	if host := b.config.CommConfig.Host(); host != expected_host {
		t.Fatalf("setup failed, bad host name: %s", host)
	}

	state := new(multistep.BasicStateBag)
	state.Put("config", &b.config)

	var driver ESX5Driver
	host, err := driver.CommHost(state)
	if err != nil {
		t.Fatalf("should not have error: %s", err)
	}
	if host != expected_host {
		t.Errorf("bad host name: %s", host)
	}
	address, ok := state.GetOk("vm_address")
	if !ok {
		t.Error("state not updated with vm_address")
	}
	if address.(string) != expected_host {
		t.Errorf("bad vm_address: %s", address.(string))
	}
}

@ -79,9 +79,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
			HTTPPortMax: b.config.HTTPPortMax,
		},
		&vmwcommon.StepConfigureVNC{
			VNCBindAddress: b.config.VNCBindAddress,
			VNCPortMin:     b.config.VNCPortMin,
			VNCPortMax:     b.config.VNCPortMax,
			VNCBindAddress:     b.config.VNCBindAddress,
			VNCPortMin:         b.config.VNCPortMin,
			VNCPortMax:         b.config.VNCPortMax,
			VNCDisablePassword: b.config.VNCDisablePassword,
		},
		&vmwcommon.StepRun{
			BootWait: b.config.BootWait,

@ -15,49 +15,50 @@ import (
	amazonchrootbuilder "github.com/mitchellh/packer/builder/amazon/chroot"
	amazonebsbuilder "github.com/mitchellh/packer/builder/amazon/ebs"
	amazoninstancebuilder "github.com/mitchellh/packer/builder/amazon/instance"
	azurearmbuilder "github.com/mitchellh/packer/builder/azure/arm"
	digitaloceanbuilder "github.com/mitchellh/packer/builder/digitalocean"
	dockerbuilder "github.com/mitchellh/packer/builder/docker"
	filebuilder "github.com/mitchellh/packer/builder/file"
	googlecomputebuilder "github.com/mitchellh/packer/builder/googlecompute"
	nullbuilder "github.com/mitchellh/packer/builder/null"
	openstackbuilder "github.com/mitchellh/packer/builder/openstack"
	parallelsisobuilder "github.com/mitchellh/packer/builder/parallels/iso"
	parallelspvmbuilder "github.com/mitchellh/packer/builder/parallels/pvm"
	profitbricksbuilder "github.com/mitchellh/packer/builder/profitbricks"
	qemubuilder "github.com/mitchellh/packer/builder/qemu"
	virtualboxisobuilder "github.com/mitchellh/packer/builder/virtualbox/iso"
	virtualboxovfbuilder "github.com/mitchellh/packer/builder/virtualbox/ovf"
	vmwareisobuilder "github.com/mitchellh/packer/builder/vmware/iso"
	vmwarevmxbuilder "github.com/mitchellh/packer/builder/vmware/vmx"
	amazonimportpostprocessor "github.com/mitchellh/packer/post-processor/amazon-import"
	amazoninstancebuilder "github.com/mitchellh/packer/builder/amazon/instance"
	ansiblelocalprovisioner "github.com/mitchellh/packer/provisioner/ansible-local"
	ansibleprovisioner "github.com/mitchellh/packer/provisioner/ansible"
	artificepostprocessor "github.com/mitchellh/packer/post-processor/artifice"
	atlaspostprocessor "github.com/mitchellh/packer/post-processor/atlas"
	azurearmbuilder "github.com/mitchellh/packer/builder/azure/arm"
	checksumpostprocessor "github.com/mitchellh/packer/post-processor/checksum"
	chefclientprovisioner "github.com/mitchellh/packer/provisioner/chef-client"
	chefsoloprovisioner "github.com/mitchellh/packer/provisioner/chef-solo"
	compresspostprocessor "github.com/mitchellh/packer/post-processor/compress"
	digitaloceanbuilder "github.com/mitchellh/packer/builder/digitalocean"
	dockerbuilder "github.com/mitchellh/packer/builder/docker"
	dockerimportpostprocessor "github.com/mitchellh/packer/post-processor/docker-import"
	dockerpushpostprocessor "github.com/mitchellh/packer/post-processor/docker-push"
	dockersavepostprocessor "github.com/mitchellh/packer/post-processor/docker-save"
	dockertagpostprocessor "github.com/mitchellh/packer/post-processor/docker-tag"
	manifestpostprocessor "github.com/mitchellh/packer/post-processor/manifest"
	shelllocalpostprocessor "github.com/mitchellh/packer/post-processor/shell-local"
	vagrantpostprocessor "github.com/mitchellh/packer/post-processor/vagrant"
	vagrantcloudpostprocessor "github.com/mitchellh/packer/post-processor/vagrant-cloud"
	vspherepostprocessor "github.com/mitchellh/packer/post-processor/vsphere"
	ansibleprovisioner "github.com/mitchellh/packer/provisioner/ansible"
	ansiblelocalprovisioner "github.com/mitchellh/packer/provisioner/ansible-local"
	chefclientprovisioner "github.com/mitchellh/packer/provisioner/chef-client"
	chefsoloprovisioner "github.com/mitchellh/packer/provisioner/chef-solo"
	filebuilder "github.com/mitchellh/packer/builder/file"
	fileprovisioner "github.com/mitchellh/packer/provisioner/file"
	googlecomputebuilder "github.com/mitchellh/packer/builder/googlecompute"
	googlecomputeexportpostprocessor "github.com/mitchellh/packer/post-processor/googlecompute-export"
	manifestpostprocessor "github.com/mitchellh/packer/post-processor/manifest"
	nullbuilder "github.com/mitchellh/packer/builder/null"
	openstackbuilder "github.com/mitchellh/packer/builder/openstack"
	parallelsisobuilder "github.com/mitchellh/packer/builder/parallels/iso"
	parallelspvmbuilder "github.com/mitchellh/packer/builder/parallels/pvm"
	powershellprovisioner "github.com/mitchellh/packer/provisioner/powershell"
	puppetmasterlessprovisioner "github.com/mitchellh/packer/provisioner/puppet-masterless"
	puppetserverprovisioner "github.com/mitchellh/packer/provisioner/puppet-server"
	qemubuilder "github.com/mitchellh/packer/builder/qemu"
	saltmasterlessprovisioner "github.com/mitchellh/packer/provisioner/salt-masterless"
	shellprovisioner "github.com/mitchellh/packer/provisioner/shell"
	shelllocalpostprocessor "github.com/mitchellh/packer/post-processor/shell-local"
	shelllocalprovisioner "github.com/mitchellh/packer/provisioner/shell-local"
	shellprovisioner "github.com/mitchellh/packer/provisioner/shell"
	vagrantcloudpostprocessor "github.com/mitchellh/packer/post-processor/vagrant-cloud"
	vagrantpostprocessor "github.com/mitchellh/packer/post-processor/vagrant"
	virtualboxisobuilder "github.com/mitchellh/packer/builder/virtualbox/iso"
	virtualboxovfbuilder "github.com/mitchellh/packer/builder/virtualbox/ovf"
	vmwareisobuilder "github.com/mitchellh/packer/builder/vmware/iso"
	vmwarevmxbuilder "github.com/mitchellh/packer/builder/vmware/vmx"
	vspherepostprocessor "github.com/mitchellh/packer/post-processor/vsphere"
	windowsrestartprovisioner "github.com/mitchellh/packer/provisioner/windows-restart"
	windowsshellprovisioner "github.com/mitchellh/packer/provisioner/windows-shell"
)

type PluginCommand struct {

@ -66,58 +67,61 @@ type PluginCommand struct {
|
|||
|
||||
var Builders = map[string]packer.Builder{
|
||||
"amazon-chroot": new(amazonchrootbuilder.Builder),
|
||||
"amazon-ebs": new(amazonebsbuilder.Builder),
|
||||
"amazon-instance": new(amazoninstancebuilder.Builder),
|
||||
"azure-arm": new(azurearmbuilder.Builder),
|
||||
"digitalocean": new(digitaloceanbuilder.Builder),
|
||||
"docker": new(dockerbuilder.Builder),
|
||||
"file": new(filebuilder.Builder),
|
||||
"amazon-ebs": new(amazonebsbuilder.Builder),
|
||||
"amazon-instance": new(amazoninstancebuilder.Builder),
|
||||
"azure-arm": new(azurearmbuilder.Builder),
|
||||
"digitalocean": new(digitaloceanbuilder.Builder),
|
||||
"docker": new(dockerbuilder.Builder),
|
||||
"file": new(filebuilder.Builder),
|
||||
"googlecompute": new(googlecomputebuilder.Builder),
|
||||
"null": new(nullbuilder.Builder),
|
||||
"openstack": new(openstackbuilder.Builder),
|
||||
"null": new(nullbuilder.Builder),
|
||||
"openstack": new(openstackbuilder.Builder),
|
||||
"parallels-iso": new(parallelsisobuilder.Builder),
|
||||
"parallels-pvm": new(parallelspvmbuilder.Builder),
|
||||
"profitbricks": new(profitbricksbuilder.Builder),
|
||||
"qemu": new(qemubuilder.Builder),
|
||||
"virtualbox-iso": new(virtualboxisobuilder.Builder),
|
||||
"virtualbox-ovf": new(virtualboxovfbuilder.Builder),
|
||||
"vmware-iso": new(vmwareisobuilder.Builder),
|
||||
"vmware-vmx": new(vmwarevmxbuilder.Builder),
|
||||
"qemu": new(qemubuilder.Builder),
|
||||
"virtualbox-iso": new(virtualboxisobuilder.Builder),
|
||||
"virtualbox-ovf": new(virtualboxovfbuilder.Builder),
|
||||
"vmware-iso": new(vmwareisobuilder.Builder),
|
||||
"vmware-vmx": new(vmwarevmxbuilder.Builder),
|
||||
}
|
||||
|
||||
|
||||
var Provisioners = map[string]packer.Provisioner{
|
||||
"ansible": new(ansibleprovisioner.Provisioner),
|
||||
"ansible-local": new(ansiblelocalprovisioner.Provisioner),
|
||||
"chef-client": new(chefclientprovisioner.Provisioner),
|
||||
"chef-solo": new(chefsoloprovisioner.Provisioner),
|
||||
"file": new(fileprovisioner.Provisioner),
|
||||
"powershell": new(powershellprovisioner.Provisioner),
|
||||
"puppet-masterless": new(puppetmasterlessprovisioner.Provisioner),
|
||||
"puppet-server": new(puppetserverprovisioner.Provisioner),
|
||||
"ansible": new(ansibleprovisioner.Provisioner),
|
||||
"ansible-local": new(ansiblelocalprovisioner.Provisioner),
|
||||
"chef-client": new(chefclientprovisioner.Provisioner),
|
||||
"chef-solo": new(chefsoloprovisioner.Provisioner),
|
||||
"file": new(fileprovisioner.Provisioner),
|
||||
"powershell": new(powershellprovisioner.Provisioner),
|
||||
"puppet-masterless": new(puppetmasterlessprovisioner.Provisioner),
|
||||
"puppet-server": new(puppetserverprovisioner.Provisioner),
|
||||
"salt-masterless": new(saltmasterlessprovisioner.Provisioner),
|
||||
"shell": new(shellprovisioner.Provisioner),
|
||||
"shell-local": new(shelllocalprovisioner.Provisioner),
|
||||
"shell": new(shellprovisioner.Provisioner),
|
||||
"shell-local": new(shelllocalprovisioner.Provisioner),
|
||||
"windows-restart": new(windowsrestartprovisioner.Provisioner),
|
||||
"windows-shell": new(windowsshellprovisioner.Provisioner),
|
||||
"windows-shell": new(windowsshellprovisioner.Provisioner),
|
||||
}
|
||||
|
||||
|
||||
var PostProcessors = map[string]packer.PostProcessor{
|
||||
"amazon-import": new(amazonimportpostprocessor.PostProcessor),
|
||||
"artifice": new(artificepostprocessor.PostProcessor),
|
||||
"atlas": new(atlaspostprocessor.PostProcessor),
|
||||
"checksum": new(checksumpostprocessor.PostProcessor),
|
||||
"compress": new(compresspostprocessor.PostProcessor),
|
||||
"docker-import": new(dockerimportpostprocessor.PostProcessor),
|
||||
"amazon-import": new(amazonimportpostprocessor.PostProcessor),
|
||||
"artifice": new(artificepostprocessor.PostProcessor),
|
||||
"atlas": new(atlaspostprocessor.PostProcessor),
|
||||
"checksum": new(checksumpostprocessor.PostProcessor),
|
||||
"compress": new(compresspostprocessor.PostProcessor),
|
||||
"docker-import": new(dockerimportpostprocessor.PostProcessor),
|
||||
"docker-push": new(dockerpushpostprocessor.PostProcessor),
|
||||
"docker-save": new(dockersavepostprocessor.PostProcessor),
|
||||
"docker-tag": new(dockertagpostprocessor.PostProcessor),
|
||||
"manifest": new(manifestpostprocessor.PostProcessor),
|
||||
"docker-tag": new(dockertagpostprocessor.PostProcessor),
|
||||
"googlecompute-export": new(googlecomputeexportpostprocessor.PostProcessor),
|
||||
"manifest": new(manifestpostprocessor.PostProcessor),
|
||||
"shell-local": new(shelllocalpostprocessor.PostProcessor),
|
||||
"vagrant": new(vagrantpostprocessor.PostProcessor),
|
||||
"vagrant-cloud": new(vagrantcloudpostprocessor.PostProcessor),
|
||||
"vsphere": new(vspherepostprocessor.PostProcessor),
|
||||
"vagrant": new(vagrantpostprocessor.PostProcessor),
|
||||
"vagrant-cloud": new(vagrantcloudpostprocessor.PostProcessor),
|
||||
"vsphere": new(vspherepostprocessor.PostProcessor),
|
||||
}
|
||||
|
||||
|
||||
var pluginRegexp = regexp.MustCompile("packer-(builder|post-processor|provisioner)-(.+)")
|
||||
|
||||
func (c *PluginCommand) Run(args []string) int {
|
||||
|
|
|
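The `pluginRegexp` in the hunk above is how Packer splits an external plugin binary's name into its kind and its short name; a small standalone sketch of the same pattern (using the `packer-builder-profitbricks` binary this branch adds as the example input):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as pluginRegexp above: capture 1 is the plugin kind,
// capture 2 is the plugin's short name.
var pluginRegexp = regexp.MustCompile("packer-(builder|post-processor|provisioner)-(.+)")

func main() {
	m := pluginRegexp.FindStringSubmatch("packer-builder-profitbricks")
	fmt.Println(m[1], m[2]) // builder profitbricks
}
```

Because the alternation is tried left to right, `packer-post-processor-googlecompute-export` resolves to kind `post-processor` with name `googlecompute-export`, not to a truncated match.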
@@ -9,6 +9,8 @@ azure_group_name=
azure_storage_name=
azure_subscription_id= # Derived from the account after login
azure_tenant_id= # Derived from the account after login
+location=
+azure_object_id=

showhelp() {
    echo "azure-setup"
@@ -88,7 +90,7 @@ askSubscription() {

askName() {
    echo ""
-   echo "Choose a name for your resource group, storage account, and client"
+   echo "Choose a name for your resource group, storage account and client"
    echo "client. This is arbitrary, but it must not already be in use by"
    echo "any of those resources. ALPHANUMERIC ONLY. Ex: mypackerbuild"
    echo -n "> "
@@ -113,9 +115,17 @@ askSecret() {
    fi
}

+askLocation() {
+    azure location list
+    echo ""
+    echo "Choose which region your resource group and storage account will be created."
+    echo -n "> "
+    read location
+}
+
createResourceGroup() {
    echo "==> Creating resource group"
-   azure group create -n $meta_name -l westus
+   azure group create -n $meta_name -l $location
    if [ $? -eq 0 ]; then
        azure_group_name=$meta_name
    else
@@ -126,7 +136,7 @@ createResourceGroup() {

createStorageAccount() {
    echo "==> Creating storage account"
-   azure storage account create -g $meta_name -l westus --sku-name LRS --kind Storage $meta_name
+   azure storage account create -g $meta_name -l $location --sku-name LRS --kind Storage $meta_name
    if [ $? -eq 0 ]; then
        azure_storage_name=$meta_name
    else
@@ -135,18 +145,10 @@ createStorageAccount() {
    fi
}

-createApplication() {
-    echo "==> Creating application"
-    azure_client_id=$(azure ad app create -n $meta_name -i http://$meta_name --home-page http://$meta_name -p $azure_client_secret --json | jq -r .appId)
-    if [ $? -ne 0 ]; then
-        echo "Error creating application: $meta_name @ http://$meta_name"
-        exit 1
-    fi
-}
-
createServicePrinciple() {
    echo "==> Creating service principal"
-   azure ad sp create $azure_client_id
+   azure_object_id=$(azure ad sp create -n $meta_name --home-page http://$meta_name --identifier-uris http://$meta_name/example -p $azure_client_secret --json | jq -r .objectId)
+   azure_client_id=$(azure ad app show -c $meta_name --json | jq -r .[0].appId)
    if [ $? -ne 0 ]; then
        echo "Error creating service principal: $azure_client_id"
        exit 1
@@ -155,7 +157,7 @@ createServicePrinciple() {

createPermissions() {
    echo "==> Creating permissions"
-   azure role assignment create -o "Owner" --spn http://$meta_name -c /subscriptions/$azure_subscription_id
+   azure role assignment create --objectId $azure_object_id -o "Owner" -c /subscriptions/$azure_subscription_id
    # We want to use this more conservative scope but it does not work with the
    # current implementation which uses temporary resource groups
    # azure role assignment create --spn http://$meta_name -g $azure_group_name -o "API Management Service Contributor"
@@ -169,11 +171,15 @@ showConfigs() {
    echo ""
    echo "Use the following configuration for your packer template:"
    echo ""
    echo "{"
    echo "      \"client_id\": \"$azure_client_id\","
    echo "      \"client_secret\": \"$azure_client_secret\","
+   echo "      \"object_id\": \"$azure_object_id\","
    echo "      \"subscription_id\": \"$azure_subscription_id\","
    echo "      \"tenant_id\": \"$azure_tenant_id\","
    echo "      \"resource_group_name\": \"$azure_group_name\","
    echo "      \"storage_account\": \"$azure_storage_name\","
    echo "}"
    echo ""
}
@@ -186,6 +192,7 @@ setup() {
    askSubscription
    askName
    askSecret
+   askLocation

    # Some of the resources take a while to converge in the API. To make the
    # script more reliable we'll add a sleep after we create each resource.

@@ -194,8 +201,6 @@ setup() {
    sleep 5
    createStorageAccount
    sleep 5
-   createApplication
-   sleep 5
    createServicePrinciple
    sleep 5
    createPermissions
@@ -23,6 +23,11 @@
    "image_offer": "UbuntuServer",
    "image_sku": "16.04.0-LTS",

+   "azure_tags": {
+       "dept": "engineering",
+       "task": "image deployment"
+   },
+
    "location": "West US",
    "vm_size": "Standard_A2"
}],
@@ -0,0 +1,37 @@
package googlecomputeexport

import (
	"fmt"
)

const BuilderId = "packer.post-processor.googlecompute-export"

type Artifact struct {
	paths []string
}

func (*Artifact) BuilderId() string {
	return BuilderId
}

func (*Artifact) Id() string {
	return ""
}

func (a *Artifact) Files() []string {
	pathsCopy := make([]string, len(a.paths))
	copy(pathsCopy, a.paths)
	return pathsCopy
}

func (a *Artifact) String() string {
	return fmt.Sprintf("Exported artifacts in: %s", a.paths)
}

func (*Artifact) State(name string) interface{} {
	return nil
}

func (a *Artifact) Destroy() error {
	return nil
}
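The `Files()` accessor in the new artifact above returns a defensive copy, so callers cannot mutate the artifact's internal path list; a minimal standalone sketch of the same idiom (the lowercase `artifact` type here is ours, mirroring the one above):

```go
package main

import "fmt"

// artifact mirrors the Artifact type above: an unexported slice
// plus an accessor that hands out a copy, never the backing array.
type artifact struct {
	paths []string
}

func (a *artifact) Files() []string {
	pathsCopy := make([]string, len(a.paths))
	copy(pathsCopy, a.paths)
	return pathsCopy
}

func main() {
	a := &artifact{paths: []string{"gs://bucket/one.tar.gz"}}
	files := a.Files()
	files[0] = "mutated"       // only the copy changes
	fmt.Println(a.paths[0])    // gs://bucket/one.tar.gz
}
```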
@@ -0,0 +1,130 @@
package googlecomputeexport

import (
	"fmt"
	"io/ioutil"
	"strings"

	"github.com/mitchellh/multistep"
	"github.com/mitchellh/packer/builder/googlecompute"
	"github.com/mitchellh/packer/common"
	"github.com/mitchellh/packer/helper/config"
	"github.com/mitchellh/packer/packer"
	"github.com/mitchellh/packer/template/interpolate"
)

type Config struct {
	common.PackerConfig `mapstructure:",squash"`

	Paths             []string `mapstructure:"paths"`
	KeepOriginalImage bool     `mapstructure:"keep_input_artifact"`

	ctx interpolate.Context
}

type PostProcessor struct {
	config Config
	runner multistep.Runner
}

func (p *PostProcessor) Configure(raws ...interface{}) error {
	err := config.Decode(&p.config, &config.DecodeOpts{
		Interpolate:        true,
		InterpolateContext: &p.config.ctx,
	}, raws...)
	if err != nil {
		return err
	}

	return nil
}

func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
	ui.Say("Starting googlecompute-export...")
	ui.Say(fmt.Sprintf("Exporting image to destinations: %v", p.config.Paths))
	if artifact.BuilderId() != googlecompute.BuilderId {
		err := fmt.Errorf(
			"Unknown artifact type: %s\nCan only export from Google Compute Engine builder artifacts.",
			artifact.BuilderId())
		return nil, p.config.KeepOriginalImage, err
	}

	result := &Artifact{paths: p.config.Paths}

	if len(p.config.Paths) > 0 {
		accountKeyFilePath := artifact.State("AccountFilePath").(string)
		imageName := artifact.State("ImageName").(string)
		imageSizeGb := artifact.State("ImageSizeGb").(int64)
		projectId := artifact.State("ProjectId").(string)
		zone := artifact.State("BuildZone").(string)

		// Set up instance configuration.
		instanceName := fmt.Sprintf("%s-exporter", artifact.Id())
		metadata := map[string]string{
			"image_name":     imageName,
			"name":           instanceName,
			"paths":          strings.Join(p.config.Paths, " "),
			"startup-script": StartupScript,
			"zone":           zone,
		}
		exporterConfig := googlecompute.Config{
			InstanceName:         instanceName,
			SourceImageProjectId: "debian-cloud",
			SourceImage:          "debian-8-jessie-v20160629",
			DiskName:             instanceName,
			DiskSizeGb:           imageSizeGb + 10,
			DiskType:             "pd-standard",
			Metadata:             metadata,
			MachineType:          "n1-standard-4",
			Zone:                 zone,
			Network:              "default",
			RawStateTimeout:      "5m",
		}
		exporterConfig.CalcTimeout()

		// Set up credentials and GCE driver.
		b, err := ioutil.ReadFile(accountKeyFilePath)
		if err != nil {
			err = fmt.Errorf("Error fetching account credentials: %s", err)
			return nil, p.config.KeepOriginalImage, err
		}
		accountKeyContents := string(b)
		googlecompute.ProcessAccountFile(&exporterConfig.Account, accountKeyContents)
		driver, err := googlecompute.NewDriverGCE(ui, projectId, &exporterConfig.Account)
		if err != nil {
			return nil, p.config.KeepOriginalImage, err
		}

		// Set up the state.
		state := new(multistep.BasicStateBag)
		state.Put("config", &exporterConfig)
		state.Put("driver", driver)
		state.Put("ui", ui)

		// Build the steps.
		steps := []multistep.Step{
			&googlecompute.StepCreateSSHKey{
				Debug:        p.config.PackerDebug,
				DebugKeyPath: fmt.Sprintf("gce_%s.pem", p.config.PackerBuildName),
			},
			&googlecompute.StepCreateInstance{
				Debug: p.config.PackerDebug,
			},
			new(googlecompute.StepWaitInstanceStartup),
			new(googlecompute.StepTeardownInstance),
		}

		// Run the steps.
		if p.config.PackerDebug {
			p.runner = &multistep.DebugRunner{
				Steps:   steps,
				PauseFn: common.MultistepDebugFn(ui),
			}
		} else {
			p.runner = &multistep.BasicRunner{Steps: steps}
		}
		p.runner.Run(state)
	}

	return result, p.config.KeepOriginalImage, nil
}
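The post-processor above drives the temporary export VM through an ordered list of `multistep` steps sharing a state bag. A self-contained sketch of that runner pattern (simplified stand-in interfaces, not the real `github.com/mitchellh/multistep` API):

```go
package main

import "fmt"

// stateBag is a simplified stand-in for multistep.BasicStateBag.
type stateBag map[string]interface{}

// step is a simplified stand-in for multistep.Step; returning
// false aborts the sequence, like returning ActionHalt.
type step interface {
	Run(state stateBag) bool
}

type basicRunner struct{ steps []step }

func (r *basicRunner) run(state stateBag) {
	for _, s := range r.steps {
		if !s.Run(state) {
			return
		}
	}
}

// sayStep just records a message, standing in for a real step
// like StepCreateInstance.
type sayStep struct{ msg string }

func (s *sayStep) Run(state stateBag) bool {
	state["log"] = append(state["log"].([]string), s.msg)
	return true
}

func main() {
	state := stateBag{"log": []string{}}
	r := &basicRunner{steps: []step{
		&sayStep{"create ssh key"},
		&sayStep{"create instance"},
		&sayStep{"wait for startup"},
		&sayStep{"teardown instance"},
	}}
	r.run(state)
	fmt.Println(state["log"])
}
```

The real library adds cleanup hooks and a debug runner that pauses between steps, but the control flow is this same ordered walk over a shared state bag.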
@@ -0,0 +1 @@
package googlecomputeexport
@@ -0,0 +1,76 @@
package googlecomputeexport

var StartupScript string = `#!/bin/sh

GetMetadata () {
    echo "$(curl -f -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/attributes/$1 2> /dev/null)"
}
IMAGENAME=$(GetMetadata image_name)
NAME=$(GetMetadata name)
DISKNAME=${NAME}-toexport
PATHS=$(GetMetadata paths)
ZONE=$(GetMetadata zone)

Exit () {
    for i in ${PATHS}; do
        LOGDEST="${i}.exporter.log"
        echo "Uploading exporter log to ${LOGDEST}..."
        gsutil -h "Content-Type:text/plain" cp /var/log/daemon.log ${LOGDEST}
    done
    exit $1
}

echo "####### Export configuration #######"
echo "Image name - ${IMAGENAME}"
echo "Instance name - ${NAME}"
echo "Instance zone - ${ZONE}"
echo "Disk name - ${DISKNAME}"
echo "Export paths - ${PATHS}"
echo "####################################"

echo "Creating disk from image to be exported..."
if ! gcloud compute disks create ${DISKNAME} --image ${IMAGENAME} --zone ${ZONE}; then
    echo "Failed to create disk."
    Exit 1
fi

echo "Attaching disk..."
if ! gcloud compute instances attach-disk ${NAME} --disk ${DISKNAME} --device-name toexport --zone ${ZONE}; then
    echo "Failed to attach disk."
    Exit 1
fi

echo "Dumping disk..."
if ! dd if=/dev/disk/by-id/google-toexport of=disk.raw bs=4096 conv=sparse; then
    echo "Failed to dump disk to image."
    Exit 1
fi

echo "Compressing and tar'ing disk image..."
if ! tar -czf root.tar.gz disk.raw; then
    echo "Failed to tar disk image."
    Exit 1
fi

echo "Detaching disk..."
if ! gcloud compute instances detach-disk ${NAME} --disk ${DISKNAME} --zone ${ZONE}; then
    echo "Failed to detach disk."
fi

FAIL=0
echo "Deleting disk..."
if ! gcloud compute disks delete ${DISKNAME} --zone ${ZONE}; then
    echo "Failed to delete disk."
    FAIL=1
fi

for i in ${PATHS}; do
    echo "Uploading tar'ed disk image to ${i}..."
    if ! gsutil -o GSUtil:parallel_composite_upload_threshold=100M cp root.tar.gz ${i}; then
        echo "Failed to upload image to ${i}."
        FAIL=1
    fi
done

Exit ${FAIL}
`
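The `Exit` helper in the startup script above derives each log destination by appending `.exporter.log` to every export path (`LOGDEST="${i}.exporter.log"`); the same mapping as a small Go sketch (the helper name is ours, not from the source):

```go
package main

import "fmt"

// logDestinations mirrors the startup script's Exit loop:
// each GCS export path gets a sibling ".exporter.log" object.
func logDestinations(paths []string) []string {
	dests := make([]string, 0, len(paths))
	for _, p := range paths {
		dests = append(dests, p+".exporter.log")
	}
	return dests
}

func main() {
	fmt.Println(logDestinations([]string{"gs://mybucket/path/to/file.tar.gz"}))
	// [gs://mybucket/path/to/file.tar.gz.exporter.log]
}
```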
@@ -6,6 +6,7 @@ import (
	"errors"
	"fmt"
	"io"
	"log"
	"net"

	"github.com/mitchellh/packer/packer"
@@ -100,62 +101,71 @@ func (c *adapter) handleSession(newChannel ssh.NewChannel) error {
		for req := range in {
			switch req.Type {
			case "pty-req":
				log.Println("ansible provisioner pty-req request")
				// accept pty-req requests, but don't actually do anything. Necessary for OpenSSH and sudo.
				req.Reply(true, nil)

			case "env":
-				req.Reply(true, nil)
-
				req, err := newEnvRequest(req)
				if err != nil {
					c.ui.Error(err.Error())
					req.Reply(false, nil)
					continue
				}
				env = append(env, req.Payload)
+				log.Printf("new env request: %s", req.Payload)
+				req.Reply(true, nil)

			case "exec":
				req, err := newExecRequest(req)
				if err != nil {
					c.ui.Error(err.Error())
					req.Reply(false, nil)
					close(done)
					continue
				}

-				if len(req.Payload) > 0 {
-					cmd := &packer.RemoteCmd{
-						Stdin:   channel,
-						Stdout:  channel,
-						Stderr:  channel.Stderr(),
-						Command: string(req.Payload),
-					}
-					log.Printf("new exec request: %s", req.Payload)
-
-					if err := c.comm.Start(cmd); err != nil {
-						c.ui.Error(err.Error())
-						close(done)
-						return
-					}
-					go func(cmd *packer.RemoteCmd, channel ssh.Channel) {
-						cmd.Wait()
-
-						exitStatus := make([]byte, 4)
-						binary.BigEndian.PutUint32(exitStatus, uint32(cmd.ExitStatus))
-						channel.SendRequest("exit-status", false, exitStatus)
-						close(done)
-					}(cmd, channel)
-				}
+				if len(req.Payload) == 0 {
+					req.Reply(false, nil)
+					close(done)
+					return
+				}
+
+				cmd := &packer.RemoteCmd{
+					Stdin:   channel,
+					Stdout:  channel,
+					Stderr:  channel.Stderr(),
+					Command: string(req.Payload),
+				}
+
+				if err := c.comm.Start(cmd); err != nil {
+					c.ui.Error(err.Error())
+					req.Reply(false, nil)
+					close(done)
+					return
+				}
+
+				go func(cmd *packer.RemoteCmd, channel ssh.Channel) {
+					cmd.Wait()
+					exitStatus := make([]byte, 4)
+					binary.BigEndian.PutUint32(exitStatus, uint32(cmd.ExitStatus))
+					channel.SendRequest("exit-status", false, exitStatus)
+					close(done)
+				}(cmd, channel)
+				req.Reply(true, nil)

			case "subsystem":
				req, err := newSubsystemRequest(req)
				if err != nil {
					c.ui.Error(err.Error())
					req.Reply(false, nil)
					continue
				}

				log.Printf("new subsystem request: %s", req.Payload)
				switch req.Payload {
				case "sftp":
+					c.ui.Say("starting sftp subsystem")
+					req.Reply(true, nil)
					sftpCmd := c.sftpCmd
					if len(sftpCmd) == 0 {
						sftpCmd = "/usr/lib/sftp-server -e"
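The exec handler above reports the command's exit code to the SSH client by encoding it as a 4-byte big-endian payload on an `exit-status` channel request; a standalone sketch of just that encoding:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeExitStatus mirrors the adapter's exit-status payload:
// the exit code as a big-endian uint32, per the SSH protocol.
func encodeExitStatus(code int) []byte {
	payload := make([]byte, 4)
	binary.BigEndian.PutUint32(payload, uint32(code))
	return payload
}

func main() {
	fmt.Println(encodeExitStatus(1)) // [0 0 0 1]
}
```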
@@ -167,16 +177,22 @@ func (c *adapter) handleSession(newChannel ssh.NewChannel) error {
						Command: sftpCmd,
					}

-					c.ui.Say("starting sftp subsystem")
					if err := c.comm.Start(cmd); err != nil {
						c.ui.Error(err.Error())
+						req.Reply(false, nil)
						close(done)
						return
					}

-					req.Reply(true, nil)
					go func() {
						cmd.Wait()
						close(done)
					}()

				default:
					c.ui.Error(fmt.Sprintf("unsupported subsystem requested: %s", req.Payload))
					req.Reply(false, nil)

				}
@@ -205,6 +221,10 @@ type envRequestPayload struct {
	Value string
}

+func (p envRequestPayload) String() string {
+	return fmt.Sprintf("%s=%s", p.Name, p.Value)
+}
+
func newEnvRequest(raw *ssh.Request) (*envRequest, error) {
	r := new(envRequest)
	r.Request = raw

@@ -238,6 +258,10 @@ type execRequest struct {

type execRequestPayload string

+func (p execRequestPayload) String() string {
+	return string(p)
+}
+
func newExecRequest(raw *ssh.Request) (*execRequest, error) {
	r := new(execRequest)
	r.Request = raw

@@ -260,6 +284,10 @@ type subsystemRequest struct {

type subsystemRequestPayload string

+func (p subsystemRequestPayload) String() string {
+	return string(p)
+}
+
func newSubsystemRequest(raw *ssh.Request) (*subsystemRequest, error) {
	r := new(subsystemRequest)
	r.Request = raw
@@ -68,7 +68,7 @@ body{

  .footer-hashi{
    display: block;
-   float: none !important;
+   float: right !important;
    .hashicorp-project{
      margin-left: -30px;
    }

@@ -76,10 +76,9 @@ body{

  ul{
    display: block;
-   width: 100%;
    li{
      display: block;
-     float: none;
+     float: left;
    }

    &.external-links{
@@ -0,0 +1,62 @@
@media only screen
and (min-device-width : 768px)
and (max-device-width : 1024px)
and (orientation : portrait) {
  #main-content {
    display: flex;
    flex-direction: row;

    #sidebar-docs {
      flex: 3;
      h2 {
        padding: 0;
      }

      ul {
        padding: 0;
        li {
          padding: 10px 0;
        }
      }
    }
    .docs-body {
      flex: 7;
      h2 {
        margin-top: 20px;
      }

      .docs-content {
        padding: 0 0 80px;
        div.alert {
          padding: 20px 30px;
        }

        code {
          font-size: 12px;
        }

        pre {
          margin: 0 -15px;
          padding: 15px;
        }

        ul {
          margin-top: 0;
          margin-left: 30px;
          margin-bottom: 10px;
        }
      }
    }
  }

  #footer {
    padding: 20px 0 30px;
    .edit-page-link {
      top: -70px;
    }
  }

  p {
    line-height: 1.5;
  }
}
@@ -20,3 +20,6 @@
@import "_footer";
@import "_components";
@import "_sidebar";
+
+// Fix view on iPad
+@import "_ipad";
@@ -109,6 +109,10 @@ builder.
    launch the resulting AMI(s). By default no additional users other than the
    user creating the AMI has permissions to launch it.

-   `ami_virtualization_type` (string) - The type of virtualization for the AMI
    you are building. This option must match the supported virtualization
    type of `source_ami`. Can be "paravirtual" or "hvm".

-   `associate_public_ip_address` (boolean) - If using a non-default VPC, public
    IP addresses are not provided by default. If this is toggled, your new
    instance will get a Public IP.
@@ -89,6 +89,7 @@ Packer to work:

``` {.javascript}
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action" : [

@@ -114,13 +115,15 @@ Packer to work:
      "ec2:DescribeSnapshots",
      "ec2:DescribeImages",
      "ec2:RegisterImage",
      "ec2:DeregisterImage",
      "ec2:CreateTags",
      "ec2:ModifyImageAttribute",
      "ec2:GetPasswordData",
      "ec2:DescribeTags",
      "ec2:DescribeImageAttribute",
      "ec2:CopyImage",
-     "ec2:DescribeRegions"
+     "ec2:DescribeRegions",
+     "ec2:ModifyInstanceAttribute"
    ],
    "Resource" : "*"
  }]
@@ -57,6 +57,10 @@ builder.

### Optional:

-   `azure_tags` (object of name/value strings) - the user can define up to 15 tags. Tag names cannot exceed 512
    characters, and tag values cannot exceed 256 characters. Tags are applied to every resource deployed by a Packer
    build, i.e. Resource Group, VM, NIC, VNET, Public IP, KeyVault, etc.

-   `cloud_environment_name` (string) One of `Public`, `China`, `Germany`, or
    `USGovernment`. Defaults to `Public`. Long forms such as
    `USGovernmentCloud` and `AzureUSGovernmentCloud` are also supported.

@@ -70,7 +74,8 @@ builder.
-   `image_url` (string) Specify a custom VHD to use. If this value is set, do not set image_publisher, image_offer,
    image_sku, or image_version.

-   `tenant_id` (string) The account identifier with which your `client_id` and `subscription_id` are associated. If not
    specified, `tenant_id` will be looked up using `subscription_id`.

-   `object_id` (string) Specify an OAuth Object ID to protect WinRM certificates
    created at runtime. This variable is required when creating images based on

@@ -125,6 +130,10 @@ Here is a basic example for Azure.
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "14.04.4-LTS",

    "azure_tags": {
        "dept": "engineering"
    },

    "location": "West US",
    "vm_size": "Standard_A2"
@@ -74,6 +74,22 @@ straightforward, it is documented here.
4.  Click "Generate new JSON key" for the Service Account you just created. A
    JSON file will be downloaded automatically. This is your *account file*.

### Precedence of Authentication Methods

Packer looks for credentials in the following places, preferring the first location found:

1.  An `account_file` option in your packer file.

2.  A JSON file (Service Account) whose path is specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.

3.  A JSON file in a location known to the `gcloud` command-line tool. (`gcloud` creates it when it's configured)

    On Windows, this is: `%APPDATA%/gcloud/application_default_credentials.json`.

    On other systems: `$HOME/.config/gcloud/application_default_credentials.json`.

4.  On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above)

## Basic Example

Below is a fully functioning example. It doesn't do anything useful, since no
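The precedence list above amounts to "first configured source wins"; a small Go sketch of that resolution order (the function and its parameter names are illustrative, not Packer's actual API):

```go
package main

import "fmt"

// resolveCredentials returns the first configured credential source,
// mirroring the documented order: account_file option, then the
// GOOGLE_APPLICATION_CREDENTIALS env var, then the gcloud default
// file, then the GCE/GAE metadata server.
func resolveCredentials(accountFile, envVar, gcloudFile string, onGCE bool) string {
	switch {
	case accountFile != "":
		return "account_file: " + accountFile
	case envVar != "":
		return "GOOGLE_APPLICATION_CREDENTIALS: " + envVar
	case gcloudFile != "":
		return "gcloud default: " + gcloudFile
	case onGCE:
		return "metadata server"
	}
	return "no credentials found"
}

func main() {
	// The env var wins here because no account_file option is set.
	fmt.Println(resolveCredentials("", "/tmp/sa.json", "", true))
}
```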
@@ -82,7 +98,7 @@ repackage an existing GCE image. The account_file is obtained in the previous
section. If it parses as JSON it is assumed to be the file itself, otherwise it
is assumed to be the path to the file containing the JSON.

-``` {.javascript}
+``` {.json}
{
  "builders": [{
    "type": "googlecompute",
@@ -146,6 +162,9 @@ builder.
-   `network` (string) - The Google Compute network to use for the
    launched instance. Defaults to `"default"`.

-   `omit_external_ip` (boolean) - If true, the instance will not have an external IP.
    `use_internal_ip` must be true if this property is true.

-   `preemptible` (boolean) - If true, launch a preemptible instance.

-   `region` (string) - The region in which to launch the instance. Defaults to
@@ -262,6 +262,15 @@ builder and not otherwise conflicting with the qemuargs):
qemu-system-x86 -m 1024m --no-acpi -netdev user,id=mynet0,hostfwd=hostip:hostport-guestip:guestport -device virtio-net,netdev=mynet0"
</pre>

\~> **Windows Users:** [QEMU for Windows](https://qemu.weilnetz.de/) builds are available, though an environment
variable needs to be set for QEMU for Windows to redirect stdout to the console instead of stdout.txt.

The following shows the environment variable that needs to be set for Windows QEMU support:

```shell
setx SDL_STDIO_REDIRECT=0
```

You can also use the `SSHHostPort` template variable to produce a packer
template that can be invoked by `make` in parallel:
@@ -101,8 +101,44 @@ builder.
    for the VM. By default, this is 40000 (about 40 GB).

-   `export_opts` (array of strings) - Additional options to pass to the
    [VBoxManage export](https://www.virtualbox.org/manual/ch08.html#vboxmanage-export).
    This can be useful for passing product information to include in the
    resulting appliance file. Packer JSON configuration file example:

    ``` {.json}
    {
      "type": "virtualbox-iso",
      "export_opts":
      [
        "--manifest",
        "--vsys", "0",
        "--description", "{{user `vm_description`}}",
        "--version", "{{user `vm_version`}}"
      ],
      "format": "ova"
    }
    ```

    A VirtualBox [VM description](https://www.virtualbox.org/manual/ch08.html#idm3756)
    may contain arbitrary strings; the GUI interprets HTML formatting.
    However, the JSON format does not allow arbitrary newlines within a
    value. Add a multi-line description by preparing the string in the
    shell before the packer call like this (shell `>` continuation
    character snipped for easier copy & paste):

    ``` {.shell}
    vm_description='some
    multiline
    description'

    vm_version='0.2.0'

    packer build \
        -var "vm_description=${vm_description}" \
        -var "vm_version=${vm_version}" \
        "packer_conf.json"
    ```

-   `floppy_files` (array of strings) - A list of files to place onto a floppy
    disk that is attached when the VM is booted. This is most useful for
@@ -83,8 +83,44 @@ builder.
    specified, the default is 10 seconds.

-   `export_opts` (array of strings) - Additional options to pass to the
    [VBoxManage export](https://www.virtualbox.org/manual/ch08.html#vboxmanage-export).
    This can be useful for passing product information to include in the
    resulting appliance file. Packer JSON configuration file example:

    ``` {.json}
    {
      "type": "virtualbox-ovf",
      "export_opts":
      [
        "--manifest",
        "--vsys", "0",
        "--description", "{{user `vm_description`}}",
        "--version", "{{user `vm_version`}}"
      ],
      "format": "ova"
    }
    ```

    A VirtualBox [VM description](https://www.virtualbox.org/manual/ch08.html#idm3756)
    may contain arbitrary strings; the GUI interprets HTML formatting.
    However, the JSON format does not allow arbitrary newlines within a
    value. Add a multi-line description by preparing the string in the
    shell before the packer call like this (shell `>` continuation
    character snipped for easier copy & paste):

    ``` {.shell}
    vm_description='some
    multiline
    description'

    vm_version='0.2.0'

    packer build \
        -var "vm_description=${vm_description}" \
        -var "vm_version=${vm_version}" \
        "packer_conf.json"
    ```

-   `floppy_files` (array of strings) - A list of files to place onto a floppy
    disk that is attached when the VM is booted. This is most useful for
@@ -267,8 +267,11 @@ builder.
    `vmx_data` first.

-   `vnc_bind_address` (string / IP address) - The IP address that should be bound
    to for VNC. By default packer will use 127.0.0.1 for this. If you wish to bind
    to all interfaces use 0.0.0.0

-   `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is
    used to secure the VNC communication with the VM.

-   `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
    to use for VNC access to the virtual machine. The builder uses VNC to type
@@ -148,6 +148,9 @@ builder.
-   `vnc_bind_address` (string / IP address) - The IP address that should be bound
    to for VNC. By default packer will use 127.0.0.1 for this.

-   `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is
    used to secure the VNC communication with the VM.

-   `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
    to use for VNC access to the virtual machine. The builder uses VNC to type
    the initial `boot_command`. Because Packer generally runs in parallel,
@@ -22,8 +22,13 @@ continuing. This will allow you to inspect state and so on.
In debug mode once the remote instance is instantiated, Packer will emit to the
current directory an ephemeral private ssh key as a .pem file. Using that you
can `ssh -i <key.pem>` into the remote build instance and see what is going on
for debugging. The key will only be emitted for cloud-based builders. The
ephemeral key will be deleted at the end of the packer run during cleanup.

For a local builder, the SSH session initiated will be visible in the detail
provided when the `PACKER_LOG=1` environment variable is set prior to a build,
and you can connect to the local machine using the userid and password defined
in the kickstart or preseed associated with initializing the local VM.

### Windows
---
description: |
    The Google Compute Image Exporter post-processor exports an image from a Packer
    googlecompute builder run and uploads it to Google Cloud Storage. The exported
    images can be easily shared and uploaded to other Google Cloud Projects.
layout: docs
page_title: 'Google Compute Image Exporter'
...

# Google Compute Image Exporter Post-Processor

Type: `googlecompute-export`

The Google Compute Image Exporter post-processor exports the resultant image from a
googlecompute build as a gzipped tarball to Google Cloud Storage (GCS).

The exporter uses the same Google Cloud Platform (GCP) project and authentication
credentials as the googlecompute build that produced the image. A temporary VM is
started in the GCP project using these credentials. The VM mounts the built image as
a disk then dumps, compresses, and tars the image. The VM then uploads the tarball
to the provided GCS `paths` using the same credentials.

As such, the authentication credentials that built the image must have write
permissions to the GCS `paths`.

## Configuration

### Required

-   `paths` (list of string) - The list of GCS paths, e.g.
    `gs://mybucket/path/to/file.tar.gz`, where the image will be exported.

### Optional

-   `keep_input_artifact` (bool) - If true, do not delete the Google Compute Engine
    (GCE) image being exported.

## Basic Example

The following example builds a GCE image in the project, `my-project`, with an
account whose keyfile is `account.json`. After the image build, a temporary VM will
be created to export the image as a gzipped tarball to
`gs://mybucket1/path/to/file1.tar.gz` and `gs://mybucket2/path/to/file2.tar.gz`.
`keep_input_artifact` is true, so the GCE image won't be deleted after the export.

In order for this example to work, the account associated with `account.json` must
have write access to both `gs://mybucket1/path/to/file1.tar.gz` and
`gs://mybucket2/path/to/file2.tar.gz`.

``` {.json}
{
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "account.json",
      "project_id": "my-project",
      "source_image": "debian-7-wheezy-v20150127",
      "zone": "us-central1-a"
    }
  ],
  "post-processors": [
    {
      "type": "googlecompute-export",
      "paths": [
        "gs://mybucket1/path/to/file1.tar.gz",
        "gs://mybucket2/path/to/file2.tar.gz"
      ],
      "keep_input_artifact": true
    }
  ]
}
```
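The dump-and-tar step the temporary VM performs can be sketched locally
(filenames are illustrative; the real VM reads the image from a disk it has
mounted, not from a file created with `dd`):

```shell
# Stand-in for the dumped raw disk image (1 MiB of zeros for illustration).
dd if=/dev/zero of=disk.raw bs=1M count=1 2>/dev/null
# Compress and tar the raw image, producing the gzipped tarball that the
# exporter uploads to each configured GCS path.
tar -czf image.tar.gz disk.raw
# Listing the archive shows it contains the single raw disk file.
tar -tzf image.tar.gz
```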
-   `puppet_server` (string) - Hostname of the Puppet server. By default
    "puppet" will be used.

-   `staging_dir` (string) - This is the directory where all the
    configuration of Puppet by Packer will be placed. By default this
    is "/tmp/packer-puppet-server". This directory doesn't need to exist but
    must have proper permissions so that the SSH user that Packer uses is able
    to create directories and write into this folder.
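A minimal template fragment showing these options in context (the server
hostname is illustrative, and `staging_dir` is shown set to its default):

``` {.json}
{
  "provisioners": [
    {
      "type": "puppet-server",
      "puppet_server": "puppet.example.com",
      "staging_dir": "/tmp/packer-puppet-server"
    }
  ]
}
```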
---
description: |
    Within the template, the builders section contains an array of all the builders
    that Packer should use to generate machine images for the template.
layout: docs
page_title: 'Templates: Builders'
...
With a properly validated template, it is time to build your first image. This
is done by calling `packer build` with the template file. The output should look
similar to below. Note that this process typically takes a few minutes.

-> **Note:** When using packer on Windows, replace the single-quotes in the
command below with double-quotes.

``` {.text}
$ packer build \
    -var 'aws_access_key=YOUR ACCESS KEY' \
```
is fully configured with dependencies and service discovery pre-baked. This
greatly reduces the risk of an unhealthy node in production due to configuration
failure at runtime.

[Serf](https://www.serf.io/?utm_source=packer&utm_campaign=HashicorpEcosystem) is
a HashiCorp tool for cluster membership and failure detection. Consul uses
Serf's gossip protocol as the foundation for service discovery.