* mapstructure-to-hcl2: when we see a map, generate an attribute spec and not a block spec.
  This allows writing
    tags = {
      key = "value"
    }
  instead of
    tags {
      key = "value"
    }
  It also makes it possible to use variables directly for those tags (see the sketch after this list).
* generate code
* update tests
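As a rough illustration (not the exact generated code), the difference boils down to emitting an hcldec attribute spec for map fields instead of a block-style spec:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Before: a map field decoded as a nested block, which forces the
	// `tags { ... }` syntax and cannot be assigned a variable.
	blockStyle := &hcldec.BlockAttrsSpec{
		TypeName:    "tags",
		ElementType: cty.String,
		Required:    false,
	}

	// After: the same field generated as a plain attribute, which allows
	// `tags = { ... }` and lets users assign a variable to it directly.
	attrStyle := &hcldec.AttrSpec{
		Name:     "tags",
		Type:     cty.Map(cty.String),
		Required: false,
	}

	// Both specs decode to a map of strings; only the accepted syntax differs.
	fmt.Println(hcldec.ImpliedType(blockStyle).GoString())
	fmt.Println(hcldec.ImpliedType(attrStyle).GoString())
}
```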
This follows #8232, which added the code to generate the code required to parse
HCL files for each Packer component.
All old Packer config files keep working the same. Packer takes one
argument. When a directory is passed, all files in the folder with a name
ending in ".pkr.hcl" or ".pkr.json" will be parsed using the HCL2 format.
When a file ending in ".pkr.hcl" or ".pkr.json" is passed, it will be parsed
using the HCL2 format. In every other case the old Packer style will be used.
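A minimal sketch of that extension rule (single-file case only; the directory case additionally walks the folder), using a hypothetical isHCL2File helper rather than Packer's actual argument handling:

```go
package main

import (
	"fmt"
	"strings"
)

// isHCL2File is a hypothetical helper mirroring the rule described above:
// only names ending in .pkr.hcl or .pkr.json go through the HCL2 parser.
func isHCL2File(name string) bool {
	return strings.HasSuffix(name, ".pkr.hcl") ||
		strings.HasSuffix(name, ".pkr.json")
}

func main() {
	for _, arg := range []string{"template.json", "build.pkr.hcl", "vars.pkr.json"} {
		if isHCL2File(arg) {
			fmt.Printf("%s: parse with HCL2\n", arg)
		} else {
			fmt.Printf("%s: parse with the legacy Packer template format\n", arg)
		}
	}
}
```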
## 1. The hcl2template pkg can create a packer.Build from a set of HCL (v2) files
I had to make packer.coreBuild (our one and only packer.Build implementation) an exported struct with exported fields.
## 2. Component interfaces get a new ConfigSpec method so their config can be read from an HCL file
This is a breaking change for Packer plugins.
A Packer component can be a builder, a provisioner, or a post-processor.
Each component interface now gets a `ConfigSpec() hcldec.ObjectSpec` method,
which lets Packer know the layout of the HCL2 config meant to configure
that specific component.
This ObjectSpec is sent over the wire (RPC), and a cty.Value is now
passed through the already existing configuration entrypoints:
`Provisioner.Prepare(raws ...interface{}) error`
`Builder.Prepare(raws ...interface{}) ([]string, error)`
`PostProcessor.Configure(raws ...interface{}) error`
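A rough sketch of that flow, assuming a trimmed-down Builder interface and skipping the RPC layer; fakeBuilder and its single ami_name attribute are made up for the example:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/zclconf/go-cty/cty"
)

// Builder mirrors the shape described above: the pre-existing Prepare
// entrypoint plus the new ConfigSpec method (all other methods omitted).
type Builder interface {
	ConfigSpec() hcldec.ObjectSpec
	Prepare(raws ...interface{}) ([]string, error)
}

// fakeBuilder is a made-up component whose whole config is one "ami_name" attribute.
type fakeBuilder struct{}

func (fakeBuilder) ConfigSpec() hcldec.ObjectSpec {
	return hcldec.ObjectSpec{
		"ami_name": &hcldec.AttrSpec{Name: "ami_name", Type: cty.String, Required: true},
	}
}

func (fakeBuilder) Prepare(raws ...interface{}) ([]string, error) {
	fmt.Printf("Prepare received: %#v\n", raws[0])
	return nil, nil
}

func main() {
	f, diags := hclparse.NewParser().ParseHCL([]byte(`ami_name = "hcl2-test"`), "example.pkr.hcl")
	if diags.HasErrors() {
		panic(diags)
	}

	var b Builder = fakeBuilder{}

	// Core of the flow: decode the HCL body with the component's own spec,
	// then hand the resulting cty.Value to the existing Prepare entrypoint.
	// In Packer itself the spec and the value both travel over RPC.
	val, moreDiags := hcldec.Decode(f.Body, b.ConfigSpec(), nil)
	if moreDiags.HasErrors() {
		panic(moreDiags)
	}
	if _, err := b.Prepare(val); err != nil {
		panic(err)
	}
}
```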
Closes #1768
Example HCL files:
```hcl
// file amazon-ebs-kms-key/run.pkr.hcl
build {
  sources = [
    "source.amazon-ebs.first",
  ]

  provisioner "shell" {
    inline = [
      "sleep 5"
    ]
  }

  post-processor "shell-local" {
    inline = [
      "sleep 5"
    ]
  }
}

// amazon-ebs-kms-key/source.pkr.hcl
source "amazon-ebs" "first" {
  ami_name      = "hcl2-test"
  region        = "us-east-1"
  instance_type = "t2.micro"
  kms_key_id    = "c729958f-c6ba-44cd-ab39-35ab68ce0a6c"
  encrypt_boot  = true

  source_ami_filter {
    filters {
      virtualization-type = "hvm"
      name                = "amzn-ami-hvm-????.??.?.????????-x86_64-gp2"
      root-device-type    = "ebs"
    }
    most_recent = true
    owners      = ["amazon"]
  }

  launch_block_device_mappings {
    device_name           = "/dev/xvda"
    volume_size           = 20
    volume_type           = "gp2"
    delete_on_termination = "true"
  }

  launch_block_device_mappings {
    device_name           = "/dev/xvdf"
    volume_size           = 500
    volume_type           = "gp2"
    delete_on_termination = true
    encrypted             = true
  }

  ami_regions = ["eu-central-1"]

  run_tags {
    Name       = "packer-solr-something"
    stack-name = "DevOps Tools"
  }

  communicator                = "ssh"
  ssh_pty                     = true
  ssh_username                = "ec2-user"
  associate_public_ip_address = true
}
```
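With these two files in one folder, running something like `packer build amazon-ebs-kms-key/` should pick both of them up through the directory mode described above, since their names end in ".pkr.hcl".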
This commit ensures that the Session Token config struct item is removed
from log output.
Signed-off-by: Krzysztof Wilczynski <kw@linux.com>
- Remove the synthetic default; defaults are established internally by the func
- Store ImportImageInput in a params var
- Only pack the LicenseType into the struct if a value has been set
- Import error messages were not very useful; obtain the import status
  from the import task as AWS reports it (see the sketch after this list)
- Interpolate s3_key_name as per PR comments (rather than hard-coding the
  generated parts of the default value)
- Remove descriptions on the AWS import job, they are optional anyway
- s3_key_name is now optional, the default is equivalent to
  "packer-import-{{timestamp}}"
- Remove the restriction on the builder used, anything producing an OVA is okay
- Fix the task and OVA descriptions passed to the import API call, correctly
  adding the timestamp to both
- Documentation updated
  - Remove VMware-specific text
  - Mark s3_key_name as optional
  - Remove s3_key_name from the example now that it's optional
  - Explain the import process more clearly in the example
- Tags follow the same approach as the amazon-ebs builder
- Clean up some debug messages
- Improve readability by pulling the AMI id out into a separate variable

Note: this duplicates the tag creation code in
builder/amazon/common/step_create_tags.go. Maybe this should be a multistep
post-processor instead, so we can re-use steps from the builder.

- Uploaded S3 object is removed after import (with an option to disable this)
- Indicate to the user when the import is complete
- Close the uploaded source file after the upload is done
- Each step of the import process logs a debug message
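To make two of the points above concrete, here is a rough, self-contained sketch of only setting LicenseType on the ImportImageInput when a value was given, and of polling the import task so that failures carry the status AWS reports. The bucket, key, and region values are placeholders, and the real post-processor wires all of this through Packer's config and UI machinery rather than a main function:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Placeholder values; the post-processor takes these from its config.
	bucket, keyName, licenseType := "my-bucket", "packer-import-1450000000.ova", ""

	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	conn := ec2.New(sess)

	// Store the ImportImageInput in a params var so optional fields can be
	// added conditionally before the call.
	params := &ec2.ImportImageInput{
		DiskContainers: []*ec2.ImageDiskContainer{
			{
				Format: aws.String("ova"),
				UserBucket: &ec2.UserBucket{
					S3Bucket: aws.String(bucket),
					S3Key:    aws.String(keyName),
				},
			},
		},
	}
	// Only pack the LicenseType into the struct if a value has been set.
	if licenseType != "" {
		params.LicenseType = aws.String(licenseType)
	}

	start, err := conn.ImportImage(params)
	if err != nil {
		log.Fatalf("failed to start import: %s", err)
	}

	// Poll the import task so error messages include the status AWS reports.
	for {
		resp, err := conn.DescribeImportImageTasks(&ec2.DescribeImportImageTasksInput{
			ImportTaskIds: []*string{start.ImportTaskId},
		})
		if err != nil {
			log.Fatalf("failed to describe import task: %s", err)
		}
		task := resp.ImportImageTasks[0]
		switch aws.StringValue(task.Status) {
		case "completed":
			fmt.Printf("import complete, AMI id: %s\n", aws.StringValue(task.ImageId))
			return
		case "deleting", "deleted":
			log.Fatalf("import failed: %s", aws.StringValue(task.StatusMessage))
		default:
			time.Sleep(10 * time.Second)
		}
	}
}
```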