Merge branch 'master' into oracle_classic_volumes

This commit is contained in:
Matthew Hooker 2018-10-29 10:18:11 -07:00 committed by GitHub
commit f0d875ce3f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
89 changed files with 4065 additions and 3619 deletions

View File

@@ -1,5 +1,5 @@
-## 1.3.2 (October 26, 2018)
+## 1.3.2 (October 29, 2018)
### IMPROVEMENTS:
* builder/alicloud: Add new `disable_stop_instance` option. [GH-6764]
* builder/alicloud: Support adding tags to image. [GH-6719]
@@ -37,7 +37,7 @@
### BUG FIXES:
* builder/alicloud: Fix ssh configuration pointer issues that could cause a bug
-    [GH-6729]
+    [GH-6720]
* builder/alicloud: Fix type error in step_create_tags [GH-6763]
* builder/amazon: Error validating credentials is no longer obscured by a
region validation error. and some region validation refactors and

View File

@@ -9,7 +9,7 @@ import (
var GitCommit string
// The main version number that is being run at the moment.
-const Version = "1.3.2"
+const Version = "1.4.0"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release

View File

@@ -2,7 +2,7 @@ set :base_url, "https://www.packer.io/"
activate :hashicorp do |h|
h.name = "packer"
-h.version = "1.3.1"
+h.version = "1.3.2"
h.github_slug = "hashicorp/packer"
h.website_root = "website"
end

View File

@@ -1,10 +1,10 @@
---
description: |
There are a handful of terms used throughout the Packer documentation where the
meaning may not be immediately obvious if you haven't used Packer before.
Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for quick referencing.
layout: docs
page_title: Terminology
---
@@ -17,39 +17,39 @@ Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for quick referencing.
- `Artifacts` are the results of a single build, and are usually a set of IDs
or files to represent a machine image. Every builder produces a single
artifact. As an example, in the case of the Amazon EC2 builder, the
artifact is a set of AMI IDs (one per region). For the VMware builder, the
artifact is a directory of files comprising the created virtual machine.
- `Builds` are a single task that eventually produces an image for a single
platform. Multiple builds run in parallel. Example usage in a sentence:
"The Packer build produced an AMI to run our web application." Or: "Packer
is running the builds now for VMware, AWS, and VirtualBox."
- `Builders` are components of Packer that are able to create a machine image
for a single platform. Builders read in some configuration and use that to
run and generate a machine image. A builder is invoked as part of a build
in order to create the actual resulting images. Example builders include
VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
Packer in the form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some job.
An example command is "build", which is invoked as `packer build`. Packer
ships with a set of commands out of the box in order to define its
command-line interface.
- `Post-processors` are components of Packer that take the result of a
builder or another post-processor and process that to create a new
artifact. Examples of post-processors are compress to compress artifacts,
upload to upload artifacts, etc.
- `Provisioners` are components of Packer that install and configure software
within a running machine prior to that machine being turned into a static
image. They perform the major work of making the image contain useful
software. Example provisioners include shell scripts, Chef, Puppet, etc.
- `Templates` are JSON files which define one or more builds by configuring
the various components of Packer. Packer is able to read a template and use
that information to create multiple machine images in parallel.
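The terms above map directly onto the top-level sections of a template. As a minimal sketch (the AMI ID, region, and inline command below are illustrative placeholders, not values from this page):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "example-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo 'installing software'"]
    }
  ],
  "post-processors": ["compress"]
}
```

Running `packer build` on this template produces one build whose artifact is an AMI ID; the post-processor then takes that result as its input.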

View File

@@ -17,55 +17,59 @@ customized images based on an existing base images.
## Configuration Reference
The following configuration options are available for building Alicloud images.
In addition to the options listed here, a
[communicator](../templates/communicator.html) can be configured for this
builder.
### Required:
- `access_key` (string) - This is the Alicloud access key. It must be
provided, but it can also be sourced from the `ALICLOUD_ACCESS_KEY`
environment variable.
- `image_name` (string) - The name of the user-defined image, \[2, 128\]
English or Chinese characters. It must begin with an uppercase/lowercase
letter or a Chinese character, and may contain numbers, `_` or `-`. It
cannot begin with `http://` or `https://`.
- `instance_type` (string) - Type of the instance. For values, see [Instance
Type
Table](https://www.alibabacloud.com/help/doc-detail/25378.htm?spm=a3c0i.o25499en.a3.9.14a36ac8iYqKRA).
You can also obtain the latest instance type table by invoking the
[Querying Instance Type
Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik)
interface.
- `region` (string) - This is the Alicloud region. It must be provided, but
it can also be sourced from the `ALICLOUD_REGION` environment variable.
- `secret_key` (string) - This is the Alicloud secret key. It must be
provided, but it can also be sourced from the `ALICLOUD_SECRET_KEY`
environment variable.
- `source_image` (string) - This is the base image ID from which you want to
create your customized images.
### Optional:
- `force_stop_instance` (boolean) - Whether to force shutdown upon device
restart. The default value is `false`.
If it is set to `false`, the system is shut down normally; if it is set to
`true`, the system is forced to shut down.
- `disable_stop_instance` (boolean) - If this option is set to `true`, Packer
will not stop the instance for you, and you need to make sure the instance
will be stopped in the final provisioner command. Otherwise, Packer will
time out while waiting for the instance to be stopped. This option is
provided for specific scenarios where you want to stop the instance
yourself, e.g., running Sysprep on Windows, which may shut down the
instance within its own command. The default value is `false`.
- `image_copy_names` (array of string) - The name of the destination image,
\[2, 128\] English or Chinese characters. It must begin with an
uppercase/lowercase letter or a Chinese character, and may contain numbers,
`_` or `-`. It cannot begin with `http://` or `https://`.
- `image_copy_regions` (array of string) - Copy to the destination regionIds.
@@ -73,65 +77,73 @@ builder.
limit of 0 to 256 characters. Leaving it blank means null, which is the
default value. It cannot begin with `http://` or `https://`.
- `image_disk_mappings` (array of image disk mappings) - Add one or more data
disks to the image.
- `disk_category` (string) - Category of the data disk. Optional values
are:
- `cloud` - general cloud disk
- `cloud_efficiency` - efficiency cloud disk
- `cloud_ssd` - cloud SSD
Default value: cloud.
- `disk_delete_with_instance` (boolean) - Whether or not the disk is
released along with the instance:
- True indicates that when the instance is released, this disk will
be released with it
- False indicates that when the instance is released, this disk will
be retained.
- `disk_description` (string) - The value of disk description is blank by default. \[2, 256\] characters. The disk description will appear on the console. It cannot begin with `http://` or `https://`.
- `disk_device` (string) - Device information of the related instance, such as
`/dev/xvdb`. It is null unless the Status is In\_use.
- `disk_name` (string) - The value of disk name is blank by default. \[2,
128\] English or Chinese characters, must begin with an
uppercase/lowercase letter or Chinese character. Can contain numbers,
`.`, `_` and `-`. The disk name will appear on the console. It cannot
begin with `http://` or `https://`.
- `disk_size` (number) - Size of the system disk, in GB, values range:
- `cloud` - 5 ~ 2000
- `cloud_efficiency` - 20 ~ 2048
- `cloud_ssd` - 20 ~ 2048
The value should be equal to or greater than the size of the specific
SnapshotId.
- `disk_snapshot_id` (string) - Snapshots are used to create the data
disk. After this parameter is specified, Size is ignored. The actual
size of the created disk is the size of the specified snapshot.
Snapshots from on or before July 15, 2013 cannot be used to create a
disk.
- `image_force_delete` (boolean) - If this value is true, when the target
image name is duplicated with an existing image, it will delete the
existing image and then create the target image, otherwise, the creation
will fail. The default value is false.
- `image_force_delete_snapshots` (boolean) - If this value is true, when
the duplicated existing image is deleted, the source snapshot of that
image will be deleted as well.
- `image_share_account` (array of string) - The IDs of to-be-added Aliyun
accounts to which the image is shared. The number of accounts is 1 to 10.
If the number of accounts is greater than 10, this parameter is ignored.
- `image_version` (string) - The version number of the image, with a length
limit of 1 to 40 English characters.
- `instance_name` (string) - Display name of the instance, which is a string
of 2 to 128 Chinese or English characters. It must begin with an
uppercase/lowercase letter or a Chinese character and can contain numerals,
`.`, `_`, or `-`. The instance name is displayed on the Alibaba Cloud
console. If this parameter is not specified, the default value is
InstanceId of the instance. It cannot begin with `http://` or `https://`.
- `internet_charge_type` (string) - Internet charge type, which can be
`PayByTraffic` or `PayByBandwidth`. Optional values:
@@ -139,67 +151,75 @@ builder.
- `PayByTraffic`
If this parameter is not specified, the default value is `PayByBandwidth`.
For regions outside China, currently only `PayByTraffic` is supported; you
must set it manually.
- `internet_max_bandwidth_out` (string) - Maximum outgoing bandwidth to the
public network, measured in Mbps (Mega bits per second).
Value range:
- `PayByBandwidth`: \[0, 100\]. If this parameter is not specified, API
automatically sets it to 0 Mbps.
- `PayByTraffic`: \[1, 100\]. If this parameter is not specified, an
error is returned.
- `io_optimized` (boolean) - Whether an ECS instance is I/O optimized or not.
The default value is `false`.
- `security_group_id` (string) - ID of the security group to which a newly
created instance belongs. Mutual access is allowed between instances in one
security group. If not specified, the newly created instance will be added
to the default security group. If the default group doesn't exist, or the
number of instances in it has reached the maximum limit, a new security
group will be created automatically.
- `security_group_name` (string) - The security group name. The default value
is blank. \[2, 128\] English or Chinese characters, must begin with an
uppercase/lowercase letter or Chinese character. Can contain numbers, `.`,
`_` or `-`. It cannot begin with `http://` or `https://`.
- `security_token` (string) - STS access token; can be set through the
template or by exporting it as an environment variable, such as
`export SecurityToken=value`.
- `skip_region_validation` (boolean) - The region validation can be skipped
if this value is true, the default value is false.
- `temporary_key_pair_name` (string) - The name of the temporary key pair to
generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where `<UUID>` is a 36 character unique identifier.
- `TLSHandshakeTimeout` (int) - If the "net/http: TLS handshake timeout"
problem occurs, set this environment variable to a larger value, such as
`export TLSHandshakeTimeout=30`, which sets the TLS handshake timeout
value to 30s.
- `user_data` (string) - The UserData of an instance must be encoded in
`Base64` format, and the maximum size of the raw data is `16 KB`.
- `user_data_file` (string) - The file name of the userdata.
- `vpc_cidr_block` (string) - Value options: `192.168.0.0/16` and
`172.16.0.0/16`. When not specified, the default value is `172.16.0.0/16`.
- `vpc_id` (string) - VPC ID allocated by the system.
- `vpc_name` (string) - The VPC name. The default value is blank. \[2, 128\]
English or Chinese characters, must begin with an uppercase/lowercase
letter or Chinese character. Can contain numbers, `_` and `-`. The VPC
name will appear on the console. Cannot begin with `http://` or
`https://`.
- `vswitch_id` (string) - The ID of the VSwitch to be used.
- `zone_id` (string) - ID of the zone to which the disk belongs.
- `ssh_private_ip` (boolean) - If this value is true, Packer will connect to
the created ECS instance through its private IP instead of allocating a
public IP or an EIP. The default value is false.
- `tags` (object of key/value strings) - Tags applied to the destination
image.
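As a sketch of how several of the options above fit together in a builder stanza, the access keys, source image, and instance type below are placeholder values only; `disable_stop_instance` and `tags` are the options documented above:

```json
{
  "builders": [
    {
      "type": "alicloud-ecs",
      "access_key": "YOUR_ACCESS_KEY",
      "secret_key": "YOUR_SECRET_KEY",
      "region": "cn-beijing",
      "image_name": "packer_test_image",
      "source_image": "ubuntu_16_0402_64_20G_alibase_20180409.vhd",
      "instance_type": "ecs.n1.tiny",
      "ssh_username": "root",
      "disable_stop_instance": false,
      "tags": { "created-by": "packer" }
    }
  ]
}
```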
## Basic Example

View File

@@ -1,7 +1,7 @@
---
description: |
The amazon-chroot Packer builder is able to create Amazon AMIs backed by an EBS
volume as the root device. For more information on the difference between
instance storage and EBS-backed instances, storage for the root device section
in the EC2 documentation.
layout: docs
@@ -20,34 +20,34 @@ device" section in the EC2
documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
The difference between this builder and the `amazon-ebs` builder is that this
builder is able to build an EBS-backed AMI without launching a new EC2
instance. This can dramatically speed up AMI builds for organizations who need
the extra fast build.
~> **This is an advanced builder** If you're just getting started with
Packer, we recommend starting with the [amazon-ebs
builder](/docs/builders/amazon-ebs.html), which is much easier to use.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc., the AMI.
## How Does it Work?
This builder works by creating a new EBS volume from an existing source AMI and
attaching it into an already-running EC2 instance. Once attached, a
[chroot](https://en.wikipedia.org/wiki/Chroot) is used to provision the system
within that volume. After provisioning, the volume is detached, snapshotted,
and an AMI is made.
Using this process, minutes can be shaved off the AMI creation process because
a new EC2 instance doesn't need to be launched.
There are some restrictions, however. The host EC2 instance where the volume is
attached to must be a similar system (generally the same OS version, kernel
versions, etc.) as the AMI being built. Additionally, this process is much more
expensive because the EC2 instance must be kept running persistently in order
to build AMIs, whereas the other AMI builders start instances on-demand to
build AMIs as needed.
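The process above needs almost no configuration beyond credentials and a source AMI, since the builder reuses the instance it runs on. A minimal sketch, assuming the template is run as root on an EC2 instance and using placeholder key and AMI values:

```json
{
  "builders": [
    {
      "type": "amazon-chroot",
      "access_key": "YOUR_ACCESS_KEY",
      "secret_key": "YOUR_SECRET_KEY",
      "source_ami": "ami-0123456789abcdef0",
      "ami_name": "example-chroot-{{timestamp}}"
    }
  ]
}
```

Note there is no communicator or instance type: provisioners run inside the chroot on the host instance itself rather than over SSH to a new instance.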
## Configuration Reference
@@ -69,99 +69,106 @@ each category, the available configuration keys are alphabetized.
how to set this](/docs/builders/amazon.html#specifying-amazon-credentials)
- `source_ami` (string) - The source AMI whose root volume will be copied and
provisioned on the currently running instance. This must be an EBS-backed
AMI with a root volume snapshot that you have access to. Note: this is not
used when `from_scratch` is set to `true`.
### Optional:
- `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with the
AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the resulting AMI(s). By default no additional users other than the
user creating the AMI have permissions to launch it.
- `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option is required to register HVM images. Can be
`paravirtual` (default) or `hvm`.
- `chroot_mounts` (array of array of strings) - This is a list of devices to
mount into the chroot environment. This configuration parameter requires
some additional documentation which is in the [Chroot
Mounts](#Chroot%20Mounts) section. Please read that section for more
information on how to use this.
- `command_wrapper` (string) - How to run shell commands. This defaults to
`{{.Command}}`. This may be useful to set if you want to set environmental
variables or perhaps run it with `sudo` or so on. This is a configuration
template where the `.Command` variable is replaced with the command to be
run. Defaults to `{{.Command}}`.
- `copy_files` (array of strings) - Paths to files on the running EC2
instance that will be copied into the chroot environment prior to
provisioning. Defaults to `/etc/resolv.conf` so that DNS lookups work. Pass
an empty list to skip copying `/etc/resolv.conf`. You may need to do this
if you're building an image that uses systemd.
- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud
provider whose API is compatible with aws EC2. Specify another endpoint
like this `https://ec2.custom.endpoint.com`.
- `decode_authorization_messages` (boolean) - Enable automatic decoding of
any encoded authorization (error) messages using the
`sts:DecodeAuthorizationMessage` API. Note: requires that the effective
user/role have permissions to `sts:DecodeAuthorizationMessage` on resource
`*`. Default `false`.
- `device_path` (string) - The path to the device where the root volume of
the source AMI will be attached. This defaults to "" (empty string), which
forces Packer to find an open device automatically.
- `ena_support` (boolean) - Enable enhanced networking (ENA but not
SriovNetSupport) on HVM-compatible AMIs. If set, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. If false, this will
disable enhanced networking in the final AMI as opposed to passing the
setting through unchanged from the source. Note: you must make sure
enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy
of the AMI with an encrypted boot volume (discarding the initial
unencrypted AMI in the process). Packer will always run this operation,
even if the base AMI has an encrypted boot volume to start with. Default
`false`.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots
associated with AMIs, which have been deregistered by `force_deregister`.
Default `false`.
- `kms_key_id` (string) - ID, alias or ARN of the KMS key to use for boot volume encryption.
This only applies to the main `region`, other regions where the AMI will be copied
will be encrypted by the default EBS KMS key. For valid formats see _KmsKeyId_ in the
[AWS API docs - CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
- `kms_key_id` (string) - ID, alias or ARN of the KMS key to use for boot
volume encryption. This only applies to the main `region`, other regions
where the AMI will be copied will be encrypted by the default EBS KMS key.
For valid formats see *KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
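As a sketch, an encrypted-boot build typically sets both options together (the key alias below is a placeholder for your own key):

``` json
"encrypt_boot": true,
"kms_key_id": "alias/my-packer-key"
```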
- `from_scratch` (boolean) - Build a new volume instead of starting from an
existing AMI root volume snapshot. Default `false`. If `true`, `source_ami`
is no longer used and the following options become required:
`ami_virtualization_type`, `pre_mount_commands` and `root_volume_size`. The
below options are also required in this mode only:
- `ami_block_device_mappings` (array of block device mappings) - Add one or
more [block device
mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
to the AMI. These will be attached when booting a new instance from your
AMI. Your options here may vary depending on the type of VM you use. The
block device mappings allow for the following configuration:
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination. Default `false`. **NOTE**: If this
value is not explicitly set to `true` and volumes are not cleaned up by
an alternative method, additional volumes will accumulate after every
build.
- `device_name` (string) - The device name exposed to the instance (for
example, `/dev/sdh` or `xvdh`). Required for every device in the block
device mapping.
- `encrypted` (boolean) - Indicates whether or not to encrypt the volume.
- `kms_key_id` (string) - The ARN for the KMS encryption key. When
specifying `kms_key_id`, `encrypted` needs to be set to `true`. For
valid formats see *KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
- `iops` (number) - The number of I/O operations per second (IOPS) that
the volume supports. See the documentation on
[IOPS](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information.
- `snapshot_id` (string) - The ID of the snapshot.
- `virtual_name` (string) - The virtual device name. See the
documentation on [Block Device
Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information.
- `volume_size` (number) - The size of the volume, in GiB. Required if
not specifying a `snapshot_id`.
- `volume_type` (string) - The volume type. `gp2` for General Purpose
(SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard`
for Magnetic volumes
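As a sketch, a mapping that attaches one extra General Purpose volume and cleans it up on termination might look like this (device name and size are illustrative):

``` json
"ami_block_device_mappings": [
  {
    "device_name": "/dev/sdh",
    "volume_size": 20,
    "volume_type": "gp2",
    "delete_on_termination": true
  }
]
```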
- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami
to, along with the custom kms key id (alias or arn) to use for encryption
for that region. Keys must match the regions provided in `ami_regions`. If
you just want to encrypt using a default ID, you can stick with
`kms_key_id` and `ami_regions`. If you want a region to be encrypted with
that region's default key ID, you can use an empty string `""` instead of a
key id in this map. (e.g. `"us-east-1": ""`) However, you cannot use
default key IDs if you are using this in conjunction with `snapshot_users`
-- in that situation you must use custom keys. For valid formats see
*KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
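For example, to copy the AMI to two regions, encrypting one with a custom key (the alias is a placeholder) and the other with that region's default key:

``` json
"ami_regions": ["us-west-2", "eu-west-1"],
"region_kms_key_ids": {
  "us-west-2": "alias/my-packer-key",
  "eu-west-1": ""
}
```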
- `root_device_name` (string) - The root device name. For example, `xvda`.
- `mfa_code` (string) - The MFA
[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the
time.
- `mount_path` (string) - The path where the volume will be mounted. This is
where the chroot environment will be. This defaults to
`/mnt/packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration
template where the `.Device` variable is replaced with the name of the
device where the volume is attached.
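For instance, to place the chroot under a custom directory (the path is illustrative) while keeping the `.Device` interpolation:

``` json
"mount_path": "/build/packer-chroot/{{.Device}}"
```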
- `mount_partition` (string) - The partition number containing the /
partition. By default this is the first partition of the volume, (for
example, `xvda1`) but you can designate the entire block device by setting
`"mount_partition": "0"` in your config, which will mount `xvda` instead.
- `root_volume_size` (number) - The size of the root volume in GB for the
chroot environment and the resulting AMI. Default size is the snapshot size
of the `source_ami` unless `from_scratch` is `true`, in which case this
field must be defined.
- `root_volume_type` (string) - The type of EBS volume for the chroot
environment and resulting AMI. The default value is the type of the
`source_ami`, unless `from_scratch` is `true`, in which case the default
value is `gp2`. You can only specify `io1` if building based on top of a
`source_ami` which is also `io1`.
- `root_volume_tags` (object of key/value strings) - Tags to apply to the
volumes that are *launched*. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `skip_region_validation` (boolean) - Set to `true` if you want to skip
validation of the `ami_regions` configuration option. Default `false`.
- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot.
They will override AMI tags if already applied to snapshot. This is a
[template engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `snapshot_groups` (array of strings) - A list of groups that have access to
create volumes from the snapshot(s). By default no groups have permission
to create volumes from the snapshot(s). `all` will make the snapshot
publicly accessible.
- `snapshot_users` (array of strings) - A list of account IDs that have
access to create volumes from the snapshot(s). By default no additional
users other than the user creating the AMI has permissions to create
volumes from the backing snapshot(s).
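As a sketch, sharing the backing snapshots with one additional account (the account ID is a placeholder) while leaving them otherwise private:

``` json
"snapshot_users": ["123456789012"]
```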
- `source_ami_filter` (object) - Filters used to populate the `source_ami`
field. Example:
``` json
"source_ami_filter": {
  "filters": {
    "virtualization-type": "hvm",
    "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
    "root-device-type": "ebs"
  },
  "owners": ["099720109477"],
  "most_recent": true
}
```
This selects the most recent Ubuntu 16.04 HVM EBS AMI from Canonical. NOTE:
This will fail unless *exactly* one AMI is returned. In the above example,
`most_recent` will cause this to succeed by selecting the newest image.
- `filters` (map of strings) - filters used to select a `source_ami`.
NOTE: This will fail unless *exactly* one AMI is returned. Any filter
described in the docs for
[DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
is valid.
- `owners` (array of strings) - Filters the images by their owner. You
may specify one or more AWS account IDs, "self" (which will use the
account whose credentials you are using to run Packer), or an AWS owner
alias: for example, "amazon", "aws-marketplace", or "microsoft". This
option is required for security reasons.
- `most_recent` (boolean) - Selects the newest created image when `true`.
This is most useful for selecting a daily distro build.
You may set this in place of `source_ami` or in conjunction with it. If you
set this in conjunction with `source_ami`, the `source_ami` will be added
to the filter. The provided `source_ami` must meet all of the filtering
criteria provided in `source_ami_filter`; this pins the AMI returned by the
filter, but will cause Packer to fail if the `source_ami` does not exist.
- `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but
not ENA) on HVM-compatible AMIs. If `true`, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make
sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
- `tags` (object of key/value strings) - Tags applied to the AMI. This is a
[template engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
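For example, combining literal tags with template-engine interpolation (the tag names are illustrative):

``` json
"tags": {
  "OS_Version": "Ubuntu",
  "Base_AMI_Name": "{{ .SourceAMIName }}"
}
```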
## Basic Example
## Gotchas
### Unmounting the Filesystem
One of the difficulties with using the chroot builder is that your provisioning
scripts must not leave any processes running or packer will be unable to
unmount the filesystem.
For debian based distributions you can setup a
[policy-rc.d](http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt)
file which will prevent packages installed by your provisioners from starting
services:
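A minimal sketch of such a guard, written by an early shell provisioner (assumptions: Debian filesystem conventions, nothing here is the page's own example):

``` json
{
  "type": "shell",
  "inline": [
    "echo '#!/bin/sh' > /usr/sbin/policy-rc.d",
    "echo 'exit 101' >> /usr/sbin/policy-rc.d",
    "chmod a+x /usr/sbin/policy-rc.d"
  ]
}
```

Returning exit code 101 from `policy-rc.d` tells `invoke-rc.d` to deny service starts during package installation.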
### Using Instances with NVMe block devices.
In C5, C5d, M5, and i3.metal instances, EBS volumes are exposed as NVMe block
devices
[reference](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html).
In order to correctly mount these devices, you have to do some extra legwork,
involving the `nvme_device_path` option above. Read that for more information.
A working example for mounting an NVMe device is below:
```
{
  "variables": {
    "region" : "us-east-2"
  },
  "builders": [
    {
      "type": "amazon-chroot",
      "region": "{{user `region`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "amzn-ami-hvm-*",
          "root-device-type": "ebs"
        },
        "owners": ["137112412989"],
        "most_recent": true
      },
      "ena_support": true,
      "ami_name": "amazon-chroot-test-{{timestamp}}",
      "nvme_device_path": "/dev/nvme1n1p",
      "device_path": "/dev/sdf"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo Test > /tmp/test.txt"]
    }
  ]
}
```
Note that in the `nvme_device_path` you must end with the `p`; if you try to
define the partition in this path (e.g. `nvme_device_path`: `/dev/nvme1n1p1`)
## Build template data
In configuration directives marked as a template engine above, the following
variables are available:
- `BuildRegion` - The region (for example `eu-central-1`) where Packer is
building the AMI.
- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build
the AMI.
- `SourceAMIName` - The source AMI Name (for example
`ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to
build the AMI.
- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object.
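As an illustration, these values can be interpolated into any option marked as a template engine above (the tag keys are illustrative):

``` json
"snapshot_tags": {
  "build_region": "{{ .BuildRegion }}",
  "source_ami_name": "{{ .SourceAMIName }}"
}
```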
Type: `amazon-ebs`
The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in [EC2](https://aws.amazon.com/ec2/). For more information on
the difference between EBS-backed instances and instance-store backed
instances, see the ["storage for the root device" section in the EC2
documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).
This builder builds an AMI by launching an EC2 instance from a source AMI,
provisioning that running machine, and then creating an AMI from that machine.
This is all done in your own AWS account. The builder will create temporary
keypairs, security group rules, etc. that provide it temporary access to the
instance while the image is being created. This simplifies configuration quite
a bit.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc. the AMI.
-> **Note:** Temporary resources are, by default, all created with the
prefix `packer`. This can be useful if you want to restrict the security groups
and key pairs Packer is able to operate on.
## Configuration Reference
- `instance_type` (string) - The EC2 instance type to use while building the
AMI, such as `t2.small`.
- `region` (string) - The name of the region, such as `us-east-1`, in which
to launch the EC2 instance to create the AMI.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this](amazon.html#specifying-amazon-credentials)
### Optional:
- `ami_block_device_mappings` (array of block device mappings) - Add one or
more [block device
mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
to the AMI. These will be attached when booting a new instance from your
AMI. To add a block device during the Packer build see
`launch_block_device_mappings` below. Your options here may vary depending
on the type of VM you use. The block device mappings allow for the
following configuration:
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination. Default `false`. **NOTE**: If this
value is not explicitly set to `true` and volumes are not cleaned up by
an alternative method, additional volumes will accumulate after every
build.
- `device_name` (string) - The device name exposed to the instance (for
example, `/dev/sdh` or `xvdh`). Required for every device in the block
device mapping.
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `iops` (number) - The number of I/O operations per second (IOPS) that
the volume supports. See the documentation on
[IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
- `snapshot_id` (string) - The ID of the snapshot
- `virtual_name` (string) - The virtual device name. See the
documentation on [Block Device
Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `volume_size` (number) - The size of the volume, in GiB. Required if
not specifying a `snapshot_id`
- `volume_type` (string) - The volume type. `gp2` for General Purpose
(SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard`
for Magnetic volumes
- `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty. This is a [template
engine](../templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
accept any value other than `all`.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with the
AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
user creating the AMI has permissions to launch it.
- `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option must match the supported virtualization type
of `source_ami`. Can be `paravirtual` or `hvm`.
- `associate_public_ip_address` (boolean) - If using a non-default VPC,
public IP addresses are not provided by default. If this is toggled, your
new instance will get a Public IP.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
- `block_duration_minutes` (int64) - Requires `spot_price` to be set. The
required duration for the Spot Instances (also known as Spot blocks). This
value must be a multiple of 60 (60, 120, 180, 240, 300, or 360). You can't
specify an Availability Zone group or a launch group if you specify a
duration.
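A sketch of a Spot-block request honoring the multiple-of-60 rule (the bid price is illustrative):

``` json
"spot_price": "0.10",
"block_duration_minutes": 120
```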
- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud
provider whose API is compatible with aws EC2. Specify another endpoint
like this `https://ec2.custom.endpoint.com`.
- `decode_authorization_messages` (boolean) - Enable automatic decoding of
any encoded authorization (error) messages using the
`sts:DecodeAuthorizationMessage` API. Note: requires that the effective
user/role have permissions to `sts:DecodeAuthorizationMessage` on resource
`*`. Default `false`.
- `disable_stop_instance` (boolean) - Packer normally stops the build
instance after all provisioners have run. For Windows instances, it is
sometimes desirable to [run
Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html)
which will stop the instance for you. If this is set to `true`, Packer
*will not* stop the instance but will assume that you will send the stop
signal yourself through your final provisioner. You can do this with a
[windows-shell
provisioner](https://www.packer.io/docs/provisioners/windows-shell.html).
Note that Packer will still wait for the instance to be stopped, and
failing to send the stop signal yourself, when you have set this flag to
`true`, will cause a timeout.
Example of a valid shutdown command:
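One possible shape, assuming a plain Windows shutdown is enough for your image (some images instead need Sysprep via their EC2 launch agent; this sketch is not the page's own example):

``` json
{
  "type": "windows-shell",
  "inline": ["shutdown /s /t 10 /f /d p:4:1 /c \"Packer Shutdown\""]
}
```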
- `ebs_optimized` (boolean) - Mark instance as [EBS
  Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
  Default `false`.
- `ena_support` (boolean) - Enable enhanced networking (ENA but not
SriovNetSupport) on HVM-compatible AMIs. If set, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. If false, this will
disable enhanced networking in the final AMI as opposed to passing the
setting through unchanged from the source. Note: you must make sure
enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source
instance to burst additional CPU beyond its available [CPU
Credits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html)
for as long as the demand exists. This is in contrast to the standard
configuration that only allows an instance to consume up to its available
CPU Credits. See the AWS documentation for [T2
Unlimited](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html)
and the **T2 Unlimited Pricing** section of the [Amazon EC2 On-Demand
Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more
information. By default this option is disabled and Packer will set up a
[T2
Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html)
instance instead.
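For example (instance type chosen for illustration, since the option only applies to T2 instances):

``` json
"instance_type": "t2.micro",
"enable_t2_unlimited": true
```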
Attempting to do so will cause an error.
!> **Warning!** Additional costs may be incurred by enabling T2
Unlimited - even for instances that would usually qualify for the [AWS Free
Tier](https://aws.amazon.com/free/).
- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy
of the AMI with an encrypted boot volume (discarding the initial
unencrypted AMI in the process). Packer will always run this operation,
even if the base AMI has an encrypted boot volume to start with. Default
`false`.
- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots
associated with AMIs, which have been deregistered by `force_deregister`.
Default `false`.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
- `kms_key_id` (string) - ID, alias or ARN of the KMS key to use for boot
volume encryption. This only applies to the main `region`, other regions
where the AMI will be copied will be encrypted by the default EBS KMS key.
For valid formats see *KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
new AMI, the instance automatically launches with these additional volumes,
and will restore them from snapshots taken from the source instance.
- `profile` (string) - The profile to use in the shared credentials file for
AWS. See Amazon's documentation on [specifying
profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles)
for more details.
- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to,
along with the custom kms key id (alias or arn) to use for encryption for that region.
Keys must match the regions provided in `ami_regions`. If you just want to
encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`.
If you want a region to be encrypted with that region's default key ID, you can
use an empty string `""` instead of a key id in this map. (e.g. `"us-east-1": ""`)
However, you cannot use default key IDs if you are using this in conjunction with
`snapshot_users` -- in that situation you must use custom keys. For valid formats
see _KmsKeyId_ in the
[AWS API docs - CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`. This is a
[template engine](../templates/engine.html),
see [Build template data](#build-template-data) for more information.
- `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes
that are *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`. This is a
[template engine](../templates/engine.html),
see [Build template data](#build-template-data) for more information.
- `security_group_id` (string) - The ID (*not* the name) of the security group
to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
- `security_group_ids` (array of strings) - A list of security groups as
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `security_group_filter` (object) - Filters used to populate the `security_group_ids` field.
Example:
  ``` json
  {
    "security_group_filter": {
      "filters": {
        "tag:Class": "packer"
      }
    }
  }
  ```
  This selects the SGs with tag `Class` with the value `packer`.
- `filters` (map of strings) - filters used to select a `security_group_ids`.
Any filter described in the docs for [DescribeSecurityGroups](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
is valid.
`security_group_ids` take precedence over this.
- `shutdown_behavior` (string) - Automatically terminate instances on shutdown
in case Packer exits ungracefully. Possible values are "stop" and "terminate",
default is `stop`.
- `skip_region_validation` (boolean) - Set to true if you want to skip
validation of the region configuration option. Default `false`.
- `snapshot_groups` (array of strings) - A list of groups that have access to
create volumes from the snapshot(s). By default no groups have permission to create
volumes from the snapshot(s). `all` will make the snapshot publicly accessible.
- `snapshot_users` (array of strings) - A list of account IDs that have access to
  create volumes from the snapshot(s). By default no additional users other than
  the user creating the AMI have permission to create volumes from the backing
  snapshot(s).
- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot.
They will override AMI tags if already applied to snapshot. This is a
[template engine](../templates/engine.html),
see [Build template data](#build-template-data) for more information.
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
Example:
  ``` json
  {
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
        "root-device-type": "ebs"
      },
      "owners": ["099720109477"],
      "most_recent": true
    }
  }
  ```
This selects the most recent Ubuntu 16.04 HVM EBS AMI from Canonical.
NOTE: This will fail unless *exactly* one AMI is returned. In the above
example, `most_recent` will cause this to succeed by selecting the newest image.
- `filters` (map of strings) - filters used to select a `source_ami`.
NOTE: This will fail unless *exactly* one AMI is returned.
Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
is valid.
- `owners` (array of strings) - Filters the images by their owner. You may
specify one or more AWS account IDs, "self" (which will use the account
whose credentials you are using to run Packer), or an AWS owner alias:
for example, `amazon`, `aws-marketplace`, or `microsoft`.
This option is required for security reasons.
- `most_recent` (boolean) - Selects the newest created image when true.
This is most useful for selecting a daily distro build.
You may set this in place of `source_ami` or in conjunction with it. If you
set this in conjunction with `source_ami`, the `source_ami` will be added to
the filter. The provided `source_ami` must meet all of the filtering criteria
provided in `source_ami_filter`; this pins the AMI returned by the filter,
but will cause Packer to fail if the `source_ami` does not exist.
- `spot_price` (string) - The maximum hourly price to pay for a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts
when the current spot price is less than the maximum price you specify. Spot
price will be updated based on available spot instance capacity and current
spot instance requests. It may save you some costs. You can set this to
`auto` for Packer to automatically discover the best spot price or to "0"
to use an on demand instance (default).
- `spot_price_auto_product` (string) - Required if `spot_price` is set
to `auto`. This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
`Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`
- `spot_tags` (object of key/value strings) - Requires `spot_price` to
be set. This tells Packer to apply tags to the spot request that is
issued.
- `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but not ENA)
on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM
policy. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. The key must match a key pair name loaded
up into Amazon EC2. By default, this is blank, and Packer will
generate a temporary keypair unless
[`ssh_password`](../templates/communicator.html#ssh_password) is used.
[`ssh_private_key_file`](../templates/communicator.html#ssh_private_key_file)
or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized.
- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
authenticate connections to the source instance. No temporary keypair will
be created, and the values of `ssh_password` and `ssh_private_key_file` will
be ignored. To use this option with a key pair already configured in the source
AMI, leave the `ssh_keypair_name` blank. To associate an existing key pair
in AWS with the source instance, set the `ssh_keypair_name` field to the name
of the key pair.
- `ssh_private_ip` (boolean) - No longer supported. See
[`ssh_interface`](#ssh_interface). A fixer exists to migrate.
- `ssh_interface` (string) - One of `public_ip`, `private_ip`, `public_dns`,
  or `private_dns`. If set, either the public IP address, private IP address,
  public DNS name or private DNS name will be used as the host for SSH. The
  default behaviour if inside a VPC is to use the public IP address if
  available, otherwise the private IP address will be used. If not in a VPC
  the public DNS name will be used. Also works for WinRM.
Where Packer is configured for an outbound proxy but WinRM traffic should be direct,
`ssh_interface` must be set to `private_dns` and `<region>.compute.internal` included
in the `NO_PROXY` environment variable.
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
`subnet-12345def`, where Packer will launch the EC2 instance. This field is
  required if you are using a non-default VPC.

- `subnet_filter` (object) - Filters used to populate the `subnet_id` field.
  Example:

  ``` json
  {
    "subnet_filter": {
      "filters": {
        "tag:Class": "build"
      },
      "most_free": true,
      "random": false
    }
  }
  ```
This selects the Subnet with tag `Class` with the value `build`, which has
the most free IP addresses.
NOTE: This will fail unless *exactly* one Subnet is returned. By using
`most_free` or `random` one will be selected from those matching the filter.
- `filters` (map of strings) - filters used to select a `subnet_id`.
NOTE: This will fail unless *exactly* one Subnet is returned.
Any filter described in the docs for [DescribeSubnets](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
is valid.
- `most_free` (boolean) - The Subnet with the most free IPv4 addresses
    will be used if multiple Subnets match the filter.

  - `random` (boolean) - A random Subnet will be used matching the filter.
    NOTE: This option will be ignored if `most_free` is set.
`subnet_id` take precedence over this.
- `tags` (object of key/value strings) - Tags applied to the AMI and
relevant snapshots. This is a
[template engine](../templates/engine.html),
see [Build template data](#build-template-data) for more information.
- `temporary_key_pair_name` (string) - The name of the temporary key pair
to generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where &lt;UUID&gt; is a 36 character unique identifier.
- `temporary_security_group_source_cidr` (string) - An IPv4 CIDR block to be
  authorized access to the instance, when Packer is creating a temporary
  security group. The default is `0.0.0.0/0` (i.e., allow any IPv4 source).
  This is only used when `security_group_id` or `security_group_ids` is not
  specified.
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
  probably don't need it. This will also be read from the `AWS_SESSION_TOKEN`
  environmental variable.

- `user_data` (string) - User data to apply when launching the instance. Note
  that you need to be careful about escaping characters due to the templates
  being JSON. It is often more convenient to use `user_data_file`, instead.

- `user_data_file` (string) - Path to a file that will be used for the user
data when launching the instance.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC. Requires `subnet_id`
to be set. If this field is left blank, Packer will try to get the VPC ID from the
`subnet_id`.
- `vpc_filter` (object) - Filters used to populate the `vpc_id` field.
Example:
  ``` json
  {
    "vpc_filter": {
      "filters": {
        "tag:Class": "build",
        "isDefault": "false",
        "cidr": "/24"
      }
    }
}
```
  This selects the VPC with tag `Class` with the value `build`, which is not
  the default VPC, and has an IPv4 CIDR block of `/24`. NOTE: This will fail
  unless *exactly* one VPC is returned.
- `filters` (map of strings) - filters used to select a `vpc_id`.
NOTE: This will fail unless *exactly* one VPC is returned.
Any filter described in the docs for [DescribeVpcs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
is valid.
`vpc_id` take precedence over this.
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value: `10m`
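As a sketch of how the spot-related options above combine, a hedged builder fragment (the values shown are illustrative placeholders, not recommendations):

``` json
{
  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
  "spot_tags": {
    "Name": "packer-spot-build"
  }
}
```

Setting `spot_price` to `auto` makes `spot_price_auto_product` required, as noted above.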
## Basic Example
Here is a basic example. You will need to provide access keys, and may need to
change the AMI IDs according to what images exist at the time the template is run:
``` json
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-fce3c696",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "packer-example {{timestamp}}"
}
```

-&gt; **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
Further information on locating AMI IDs and their relationship to instance types
and regions can be found in the AWS EC2 Documentation
[for Linux](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html)
or [for Windows](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/finding-an-ami.html).
## Accessing the Instance to Debug
If you need to access the instance to debug for some reason, run the builder
with the `-debug` flag. In debug mode, the Amazon builder will save the
private key in the current directory and will output the DNS or IP
information as well. You can use this information to access the instance as
it is running.
## Build template data
In configuration directives marked as a template engine above, the
following variables are available:
- `BuildRegion` - The region (for example `eu-central-1`) where Packer is building the AMI.
- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build the AMI.
- `SourceAMIName` - The source AMI Name (for example `ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to build the AMI.
- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object.
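As a brief sketch, any directive marked as a template engine above can interpolate these variables; a hypothetical `tags` block might read:

``` json
{
  "tags": {
    "Base_AMI_ID": "{{ .SourceAMI }}",
    "Base_AMI_Name": "{{ .SourceAMIName }}",
    "Build_Region": "{{ .BuildRegion }}"
  }
}
```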
## Tag Example
Here is an example using the optional AMI tags. This will add the tags
`OS_Version` and `Release` to the finished AMI. As before, you will need to
provide your access keys, and may need to change the source AMI ID based on what
images exist when this template is run:
``` json
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-fce3c696",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "packer-quick-start {{timestamp}}",
  "tags": {
    "OS_Version": "Ubuntu",
    "Release": "Latest",
    "Base_AMI_Name": "{{ .SourceAMIName }}",
    "Extra": "{{ .SourceAMITags.TagName }}"
  }
}
```
-&gt; **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. Packer will attempt to clean
up all residual volumes that are not designated by the user to remain after
termination. If you need to preserve those source volumes, you can overwrite the
termination setting by specifying `delete_on_termination=false` in the
`launch_block_device_mappings` block for the device.
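A minimal sketch of the override described in the note above (the device name `/dev/sdf` is illustrative only):

``` json
{
  "launch_block_device_mappings": [
    {
      "device_name": "/dev/sdf",
      "delete_on_termination": false
    }
  ]
}
```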
## Windows 2016 Sysprep Commands - For Amazon Windows AMIs Only
For Amazon Windows 2016 AMIs it is necessary to run Sysprep commands which can be easily added
to the provisioner section.
``` json
{
  "type": "powershell",
  "inline": [
    "C:/ProgramData/Amazon/EC2-Launch/Scripts/InitializeInstance.ps1 -Schedule",
    "C:/ProgramData/Amazon/EC2-Launch/Scripts/SysprepInstance.ps1 -NoShutdown"
  ]
}
```

---
description: |
The amazon-ebssurrogate Packer builder is like the chroot builder, but does
not require running inside an EC2 instance.
layout: docs
page_title: 'Amazon EBS Surrogate - Builders'
sidebar_current: 'docs-builders-amazon-ebssurrogate'
---

# AMI Builder (EBS Surrogate)

Type: `amazon-ebssurrogate`

The `amazon-ebssurrogate` Packer builder is able to create Amazon AMIs by
running a source instance with an attached volume, provisioning the attached
volume in such a way that it can be used as the root volume for the AMI, and
then snapshotting and creating the AMI from that volume.
This builder can therefore be used to bootstrap scratch-build images - for
example FreeBSD or Ubuntu using ZFS as the root file system.
This is all done in your own AWS account. This builder will create temporary key
pairs, security group rules, etc., that provide it temporary access to the
instance while the image is being created.
## Configuration Reference
### Required:

- `access_key` (string) - The access key used to communicate with AWS. [Learn
  how to set this](/docs/builders/amazon.html#specifying-amazon-credentials)
- `instance_type` (string) - The EC2 instance type to use while building the
AMI, such as `m1.small`.
- `region` (string) - The name of the region, such as `us-east-1`, in which to
launch the EC2 instance to create the AMI.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this](/docs/builders/amazon.html#specifying-amazon-credentials)
- `source_ami` (string) - The initial AMI used as a base for the newly
created machine. `source_ami_filter` may be used instead to populate this
automatically.
- `ami_root_device` (block device mapping) - A block device mapping describing
the root device of the AMI. This looks like the mappings in `ami_block_device_mapping`,
except with an additional field:
- `source_device_name` (string) - The device name of the block device on the
source instance to be used as the root device for the AMI. This must correspond
to a block device in `launch_block_device_mapping`.
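To illustrate how `ami_root_device` ties back to `launch_block_device_mappings` as described above, a hedged sketch (device names and sizes are examples only):

``` json
{
  "launch_block_device_mappings": [
    {
      "device_name": "/dev/xvdf",
      "delete_on_termination": true,
      "volume_size": 15,
      "volume_type": "gp2"
    }
  ],
  "ami_root_device": {
    "source_device_name": "/dev/xvdf",
    "device_name": "/dev/xvda",
    "delete_on_termination": true,
    "volume_size": 15,
    "volume_type": "gp2"
  }
}
```

Here the volume provisioned at `/dev/xvdf` on the source instance becomes `/dev/xvda`, the root device, in the resulting AMI.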
### Optional:
- `ami_block_device_mappings` (array of block device mappings) - Add one or
more [block device mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
to the AMI. These will be attached when booting a new instance from your
AMI. To add a block device during the packer build see
`launch_block_device_mappings` below. Your options here may vary depending
on the type of VM you use. The block device mappings allow for the following
configuration:
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination. Default `false`. **NOTE**: If this
value is not explicitly set to `true` and volumes are not cleaned up by
an alternative method, additional volumes will accumulate after
every build.
- `device_name` (string) - The device name exposed to the instance (for
example, `/dev/sdh` or `xvdh`). Required for every device in the
block device mapping.
- `encrypted` (boolean) - Indicates whether or not to encrypt the volume.
- `iops` (number) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on
[IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information.
  - `no_device` (boolean) - Suppresses the specified device included in the
    block device mapping of the AMI.
- `snapshot_id` (string) - The ID of the snapshot.
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device
Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information.
- `volume_size` (number) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`.
- `volume_type` (string) - The volume type. (`gp2` for General Purpose (SSD)
volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic
volumes)
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty. This is a
[template engine](/docs/templates/engine.html),
see [Build template data](#build-template-data) for more information.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
  the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
accept any value other than `all`.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
  depending on the size of the AMI, but will generally take many minutes.

- `ami_users` (array of strings) - A list of account IDs that have access to
  launch the resulting AMI(s). By default no additional users other than the
user creating the AMI has permissions to launch it.
- `ami_virtualization_type` (string) - The type of virtualization for the AMI
you are building. This option must match the supported virtualization
type of `source_ami`. Can be `paravirtual` or `hvm`.
- `associate_public_ip_address` (boolean) - If using a non-default VPC, public
IP addresses are not provided by default. If this is toggled, your new
instance will get a Public IP.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
- `block_duration_minutes` (int64) - Requires `spot_price` to
be set. The required duration for the Spot Instances (also known as Spot blocks).
This value must be a multiple of 60 (60, 120, 180, 240, 300, or 360).
You can't specify an Availability Zone group or a launch group if you specify a duration.
- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud
provider whose API is compatible with aws EC2. Specify another endpoint
like this `https://ec2.custom.endpoint.com`.
- `decode_authorization_messages` (boolean) - Enable automatic decoding of any
encoded authorization (error) messages using the `sts:DecodeAuthorizationMessage` API.
Note: requires that the effective user/role have permissions to `sts:DecodeAuthorizationMessage`
on resource `*`. Default `false`.
- `disable_stop_instance` (boolean) - Packer normally stops the build instance
after all provisioners have run. For Windows instances, it is sometimes
desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html)
which will stop the instance for you. If this is set to true, Packer *will not*
stop the instance but will assume that you will send the stop signal
yourself through your final provisioner. You can do this with a
[windows-shell
provisioner](https://www.packer.io/docs/provisioners/windows-shell.html).
Note that Packer will still wait for the instance to be stopped, and
failing to send the stop signal yourself, when you have set this flag to
`true`, will cause a timeout.
Example of a valid shutdown command:
@ -175,25 +180,26 @@ builder.
Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
Default `false`.
- `ena_support` (boolean) - Enable enhanced networking (ENA but not
SriovNetSupport) on HVM-compatible AMIs. If set, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. If false, this will
disable enhanced networking in the final AMI as opposed to passing the
setting through unchanged from the source. Note: you must make sure
enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source
instance to burst additional CPU beyond its available [CPU
Credits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html)
for as long as the demand exists. This is in contrast to the standard
configuration that only allows an instance to consume up to its available
CPU Credits. See the AWS documentation for [T2
Unlimited](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html)
and the **T2 Unlimited Pricing** section of the [Amazon EC2 On-Demand
Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more
information. By default this option is disabled and Packer will set up a
[T2
Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html)
instance instead.
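A minimal fragment enabling T2 Unlimited might look like the following (the
instance type shown is illustrative; any burstable type applies):

``` json
{
  "instance_type": "t2.micro",
  "enable_t2_unlimited": true
}
```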
@ -203,27 +209,27 @@ builder.
Attempting to do so will cause an error.
!> **Warning!** Additional costs may be incurred by enabling T2
Unlimited - even for instances that would usually qualify for the [AWS Free
Tier](https://aws.amazon.com/free/).
- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy
of the AMI with an encrypted boot volume (discarding the initial
unencrypted AMI in the process). Packer will always run this operation,
even if the base AMI has an encrypted boot volume to start with. Default
`false`.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots
associated with AMIs, which have been deregistered by `force_deregister`.
Default `false`.
- `kms_key_id` (string) - ID, alias or ARN of the KMS key to use for boot
volume encryption. This only applies to the main `region`, other regions
where the AMI will be copied will be encrypted by the default EBS KMS key.
For valid formats see *KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
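As a sketch, `encrypt_boot` and `kms_key_id` are typically paired; the key
alias below is hypothetical:

``` json
{
  "encrypt_boot": true,
  "kms_key_id": "alias/my-packer-key"
}
```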
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
@ -238,40 +244,42 @@ builder.
new AMI, the instance automatically launches with these additional volumes,
and will restore them from snapshots taken from the source instance.
- `mfa_code` (string) - The MFA
[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the
time.
- `profile` (string) - The profile to use in the shared credentials file for
AWS. See Amazon's documentation on [specifying
profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles)
for more details.
- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami
to, along with the custom kms key id (alias or arn) to use for encryption
for that region. Keys must match the regions provided in `ami_regions`. If
you just want to encrypt using a default ID, you can stick with
`kms_key_id` and `ami_regions`. If you want a region to be encrypted with
that region's default key ID, you can use an empty string `""` instead of a
key id in this map. (e.g. `"us-east-1": ""`) However, you cannot use
default key IDs if you are using this in conjunction with `snapshot_users`
-- in that situation you must use custom keys. For valid formats see
*KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
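For illustration, the fragment below (with hypothetical key aliases) encrypts
the `us-east-1` copy with a custom key and falls back to that region's default
EBS KMS key in `eu-west-1` via the empty string:

``` json
{
  "ami_regions": ["us-east-1", "eu-west-1"],
  "region_kms_key_ids": {
    "us-east-1": "alias/my-custom-key",
    "eu-west-1": ""
  }
}
```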
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
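A sketch of `run_tags`, interpolating one of the variables described under
[Build template data](#build-template-data) (the tag names are illustrative):

``` json
{
  "run_tags": {
    "Name": "packer-builder",
    "BaseAMI": "{{ .SourceAMI }}"
  }
}
```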
- `run_volume_tags` (object of key/value strings) - Tags to apply to the
volumes that are *launched* to create the AMI. These tags are *not* applied
to the resulting AMI unless they're duplicated in `tags`. This is a
[template engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `security_group_id` (string) - The ID (*not* the name) of the security
group to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
@ -280,8 +288,8 @@ builder.
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `security_group_filter` (object) - Filters used to populate the
`security_group_ids` field. Example:
``` json
{
@ -295,34 +303,37 @@ builder.
This selects the SGs with tag `Class` with the value `packer`.
- `filters` (map of strings) - filters used to select a
`security_group_ids`. Any filter described in the docs for
[DescribeSecurityGroups](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
is valid.
`security_group_ids` take precedence over this.
- `shutdown_behavior` (string) - Automatically terminate instances on
shutdown in case Packer exits ungracefully. Possible values are "stop" and
"terminate"; the default is `stop`.
- `skip_region_validation` (boolean) - Set to true if you want to skip
validation of the region configuration option. Default `false`.
- `snapshot_groups` (array of strings) - A list of groups that have access to
create volumes from the snapshot(s). By default no groups have permission
to create volumes from the snapshot(s). `all` will make the snapshot
publicly accessible.
- `snapshot_users` (array of strings) - A list of account IDs that have
access to create volumes from the snapshot(s). By default no additional
users other than the user creating the AMI has permissions to create
volumes from the backing snapshot(s).
- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot.
They will override AMI tags if already applied to snapshot. This is a
[template engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `source_ami_filter` (object) - Filters used to populate the `source_ami`
field. Example:
``` json
{
@ -338,83 +349,85 @@ builder.
}
```
This selects the most recent Ubuntu 16.04 HVM EBS AMI from Canonical. NOTE:
This will fail unless *exactly* one AMI is returned. In the above example,
`most_recent` will cause this to succeed by selecting the newest image.
- `filters` (map of strings) - filters used to select a `source_ami`.
NOTE: This will fail unless *exactly* one AMI is returned. Any filter
described in the docs for
[DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
is valid.
- `owners` (array of strings) - Filters the images by their owner. You
may specify one or more AWS account IDs, `self` (which will use the
account whose credentials you are using to run Packer), or an AWS owner
alias: for example, `amazon`, `aws-marketplace`, or `microsoft`. This
option is required for security reasons.
- `most_recent` (boolean) - Selects the newest created image when true.
This is most useful for selecting a daily distro build.
You may set this in place of `source_ami` or in conjunction with it. If you
set this in conjunction with `source_ami`, the `source_ami` will be added
to the filter. The provided `source_ami` must meet all of the filtering
criteria provided in `source_ami_filter`; this pins the AMI returned by the
filter, but will cause Packer to fail if the `source_ami` does not exist.
- `spot_price` (string) - The maximum hourly price to pay for a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts
when the current spot price is less than the maximum price you specify.
Spot price will be updated based on available spot instance capacity and
current spot instance requests. It may save you some costs. You can set
this to `auto` for Packer to automatically discover the best spot price or
to "0" to use an on demand instance (default).
- `spot_price_auto_product` (string) - Required if `spot_price` is set to
`auto`. This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`,
`Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`,
`Windows (Amazon VPC)`
- `spot_tags` (object of key/value strings) - Requires `spot_price` to be
set. This tells Packer to apply tags to the spot request that is issued.
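The spot-related options above can be combined; a hypothetical fragment
requesting a two-hour Spot block at an automatically discovered price:

``` json
{
  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
  "block_duration_minutes": 120,
  "spot_tags": {
    "Purpose": "packer-build"
  }
}
```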
- `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but
not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute`
to your AWS IAM policy. Note: you must make sure enhanced networking is
enabled on your instance. See [Amazon's documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. The key must match a key pair name loaded up
into Amazon EC2. By default, this is blank, and Packer will generate a
temporary keypair unless
[`ssh_password`](/docs/templates/communicator.html#ssh_password) is used.
[`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file)
or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized.
- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
authenticate connections to the source instance. No temporary keypair will
be created, and the values of `ssh_password` and `ssh_private_key_file`
will be ignored. To use this option with a key pair already configured in
the source AMI, leave the `ssh_keypair_name` blank. To associate an
existing key pair in AWS with the source instance, set the
`ssh_keypair_name` field to the name of the key pair.
- `ssh_private_ip` (boolean) - No longer supported. See
[`ssh_interface`](#ssh_interface). A fixer exists to migrate.
- `ssh_interface` (string) - One of `public_ip`, `private_ip`, `public_dns`
or `private_dns`. If set, either the public IP address, private IP address,
public DNS name or private DNS name will be used as the host for SSH. The
default behaviour if inside a VPC is to use the public IP address if
available, otherwise the private IP address will be used. If not in a VPC
the public DNS name will be used. Also works for WinRM.
Where Packer is configured for an outbound proxy but WinRM traffic should
be direct, `ssh_interface` must be set to `private_dns` and
`<region>.compute.internal` included in the `NO_PROXY` environment
variable.
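For example, to force SSH over the instance's private IP inside a VPC (the
username is illustrative):

``` json
{
  "ssh_interface": "private_ip",
  "ssh_username": "ec2-user"
}
```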
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
`subnet-12345def`, where Packer will launch the EC2 instance. This field is
@ -435,14 +448,15 @@ builder.
}
```
This selects the Subnet with tag `Class` with the value `build`, which has
the most free IP addresses. NOTE: This will fail unless *exactly* one
Subnet is returned. By using `most_free` or `random` one will be selected
from those matching the filter.
- `filters` (map of strings) - filters used to select a `subnet_id`.
NOTE: This will fail unless *exactly* one Subnet is returned. Any
filter described in the docs for
[DescribeSubnets](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
is valid.
- `most_free` (boolean) - The Subnet with the most free IPv4 addresses
@ -453,18 +467,18 @@ builder.
`subnet_id` take precedence over this.
- `tags` (object of key/value strings) - Tags applied to the AMI and relevant
snapshots. This is a [template engine](/docs/templates/engine.html), see
[Build template data](#build-template-data) for more information.
- `temporary_key_pair_name` (string) - The name of the temporary keypair to
generate. By default, Packer generates a name with a UUID.
- `temporary_security_group_source_cidr` (string) - An IPv4 CIDR block to be
authorized access to the instance, when packer is creating a temporary
security group. The default is `0.0.0.0/0` (i.e., allow any IPv4 source).
This is only used when `security_group_id` or `security_group_ids` is not
specified.
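A sketch restricting the temporary security group to a hypothetical internal
network range instead of the default open ingress:

``` json
{
  "temporary_security_group_source_cidr": "10.0.0.0/16"
}
```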
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
@ -479,9 +493,9 @@ builder.
data when launching the instance.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC. Requires
`subnet_id` to be set. If this field is left blank, Packer will try to get
the VPC ID from the `subnet_id`.
- `vpc_filter` (object) - Filters used to populate the `vpc_id` field.
Example:
@ -498,19 +512,21 @@ builder.
}
```
This selects the VPC with tag `Class` with the value `build`, which is not
the default VPC, and has an IPv4 CIDR block of `/24`. NOTE: This will fail
unless *exactly* one VPC is returned.
- `filters` (map of strings) - filters used to select a `vpc_id`. NOTE:
This will fail unless *exactly* one VPC is returned. Any filter
described in the docs for
[DescribeVpcs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
is valid.
`vpc_id` take precedence over this.
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value:
`10m`
## Basic Example
@ -546,9 +562,10 @@ environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
Further information on locating AMI IDs and their relationship to instance
types and regions can be found in the AWS EC2 Documentation [for
Linux](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html)
or [for
Windows](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/finding-an-ami.html).
## Accessing the Instance to Debug
@ -559,17 +576,20 @@ You can use this information to access the instance as it is running.
## Build template data
In configuration directives marked as a template engine above, the following
variables are available:
- `BuildRegion` - The region (for example `eu-central-1`) where Packer is
building the AMI.
- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build
the AMI.
- `SourceAMIName` - The source AMI Name (for example
`ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to
build the AMI.
- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object.
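For instance, these variables can be interpolated into `tags` (the tag keys
are illustrative):

``` json
{
  "tags": {
    "Build_Region": "{{ .BuildRegion }}",
    "Base_AMI_ID": "{{ .SourceAMI }}",
    "Base_AMI_Name": "{{ .SourceAMIName }}"
  }
}
```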
-> **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. In addition to those
volumes created by this builder, any volumes in the source AMI which are not
marked for deletion on termination will remain in your account.
View File
@ -1,7 +1,7 @@
---
description: |
The amazon-ebsvolume Packer builder is like the EBS builder, but is intended to
create EBS volumes rather than a machine image.
layout: docs
page_title: 'Amazon EBS Volume - Builders'
sidebar_current: 'docs-builders-amazon-ebsvolume'
@ -22,12 +22,12 @@ This is all done in your own AWS account. The builder will create temporary key
pairs, security group rules, etc. that provide it temporary access to the
instance while the image is being created.
The builder does *not* manage EBS Volumes. Once it creates volumes and stores
it in your account, it is up to you to use, delete, etc. the volumes.
-> **Note:** Temporary resources are, by default, all created with the
prefix `packer`. This can be useful if you want to restrict the security groups
and key pairs Packer is able to operate on.
## Configuration Reference
@ -47,8 +47,8 @@ builder.
- `instance_type` (string) - The EC2 instance type to use while building the
AMI, such as `m1.small`.
- `region` (string) - The name of the region, such as `us-east-1`, in which
to launch the EC2 instance to create the AMI.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
@ -59,12 +59,12 @@ builder.
### Optional:
- `ebs_volumes` (array of block device mappings) - Add the block device
mappings to the AMI. The block device mappings allow for keys:
- `device_name` (string) - The device name exposed to the instance (for
example, `/dev/sdh` or `xvdh`). Required for every device in the block
device mapping.
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination.
@ -72,13 +72,12 @@ builder.
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not.
- `kms_key_id` (string) - The ARN for the KMS encryption key. When
specifying `kms_key_id`, `encrypted` needs to be set to `true`. For
valid formats see *KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
- `iops` (number) - The number of I/O operations per second (IOPS) that
the volume supports. See the documentation on
[IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
@ -87,50 +86,53 @@ builder.
- `snapshot_id` (string) - The ID of the snapshot
- `virtual_name` (string) - The virtual device name. See the
documentation on [Block Device
Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `volume_size` (number) - The size of the volume, in GiB. Required if
not specifying a `snapshot_id`
- `volume_type` (string) - The volume type. `gp2` for General Purpose
(SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard`
for Magnetic volumes
- `tags` (map) - Tags to apply to the volume. These are retained after
the builder completes. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `associate_public_ip_address` (boolean) - If using a non-default VPC,
public IP addresses are not provided by default. If this is toggled, your
new instance will get a Public IP.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
- `block_duration_minutes` (int64) - Requires `spot_price` to be set. The
required duration for the Spot Instances (also known as Spot blocks). This
value must be a multiple of 60 (60, 120, 180, 240, 300, or 360). You can't
specify an Availability Zone group or a launch group if you specify a
duration.
- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud
provider whose API is compatible with aws EC2. Specify another endpoint
like this `https://ec2.custom.endpoint.com`.
- `disable_stop_instance` (boolean) - Packer normally stops the build
instance after all provisioners have run. For Windows instances, it is
sometimes desirable to [run
Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html)
which will stop the instance for you. If this is set to true, Packer *will
not* stop the instance but will assume that you will send the stop signal
yourself through your final provisioner. You can do this with a
[windows-shell
provisioner](https://www.packer.io/docs/provisioners/windows-shell.html).
Note that Packer will still wait for the instance to be stopped, and
failing to send the stop signal yourself, when you have set this flag to
`true`, will cause a timeout.
Example of a valid shutdown command:
@@ -141,52 +143,56 @@ builder.
}
```
- `decode_authorization_messages` (boolean) - Enable automatic decoding of
any encoded authorization (error) messages using the
`sts:DecodeAuthorizationMessage` API. Note: requires that the effective
user/role have permissions to `sts:DecodeAuthorizationMessage` on resource
`*`. Default `false`.
- `ebs_optimized` (boolean) - Mark instance as [EBS
Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
Default `false`.
- `ena_support` (boolean) - Enable enhanced networking (ENA but not
SriovNetSupport) on HVM-compatible AMIs. If set, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. If false, this will
disable enhanced networking in the final AMI as opposed to passing the
setting through unchanged from the source. Note: you must make sure
enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
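    As a rough sketch only (the region, instance type, and AMI ID below are
    illustrative placeholders, not values from this page), enabling ENA in a
    template could look like:

    ``` json
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "c5.large",
      "source_ami": "ami-xxxxxxxx",
      "ami_name": "ena-example {{timestamp}}",
      "ena_support": true
    }
    ```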
- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source
    instance to burst additional CPU beyond its available [CPU
    Credits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html)
    for as long as the demand exists. This is in contrast to the standard
    configuration that only allows an instance to consume up to its available
    CPU Credits. See the AWS documentation for [T2
    Unlimited](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html)
and the 'T2 Unlimited Pricing' section of the [Amazon EC2 On-Demand
Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more
information. By default this option is disabled and Packer will set up a
[T2
Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html)
instance instead.
To use T2 Unlimited you must use a T2 instance type e.g. t2.micro.
Additionally, T2 Unlimited cannot be used in conjunction with Spot
Instances e.g. when the `spot_price` option has been configured. Attempting
to do so will cause an error.
    !> **Warning!** Additional costs may be incurred by enabling T2
Unlimited - even for instances that would usually qualify for the [AWS Free
Tier](https://aws.amazon.com/free/).
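    For illustration only (the instance type and AMI ID are placeholder
    assumptions), a T2 Unlimited build pairs a T2 instance type with the flag
    and leaves `spot_price` unset:

    ``` json
    {
      "type": "amazon-ebs",
      "instance_type": "t2.micro",
      "source_ami": "ami-xxxxxxxx",
      "ami_name": "t2-unlimited-example {{timestamp}}",
      "enable_t2_unlimited": true
    }
    ```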
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
to launch the EC2 instance with.
- `mfa_code` (string) - The MFA
[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the
time.
- `profile` (string) - The profile to use in the shared credentials file for
AWS. See Amazon's documentation on [specifying
@@ -195,12 +201,12 @@ builder.
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `security_group_id` (string) - The ID (*not* the name) of the security
group to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
@@ -209,8 +215,8 @@ builder.
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `security_group_filter` (object) - Filters used to populate the
`security_group_ids` field. Example:
``` json
{
@@ -224,29 +230,32 @@ builder.
    This selects the SGs with tag `Class` with the value `packer`.
- `filters` (map of strings) - filters used to select a
`security_group_ids`. Any filter described in the docs for
[DescribeSecurityGroups](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
is valid.
`security_group_ids` take precedence over this.
- `shutdown_behavior` (string) - Automatically terminate instances on
shutdown in case Packer exits ungracefully. Possible values are `stop` and
`terminate`. Defaults to `stop`.
- `skip_region_validation` (boolean) - Set to `true` if you want to skip
validation of the region configuration option. Defaults to `false`.
- `snapshot_groups` (array of strings) - A list of groups that have access to
create volumes from the snapshot(s). By default no groups have permission
to create volumes from the snapshot(s). `all` will make the snapshot
publicly accessible.
- `snapshot_users` (array of strings) - A list of account IDs that have
access to create volumes from the snapshot(s). By default no additional
users other than the user creating the AMI has permissions to create
volumes from the backing snapshot(s).
- `source_ami_filter` (object) - Filters used to populate the `source_ami`
field. Example:
``` json
{
@@ -262,51 +271,53 @@ builder.
}
```
This selects the most recent Ubuntu 16.04 HVM EBS AMI from Canonical. NOTE:
This will fail unless *exactly* one AMI is returned. In the above example,
`most_recent` will cause this to succeed by selecting the newest image.
- `filters` (map of strings) - filters used to select a `source_ami`.
NOTE: This will fail unless *exactly* one AMI is returned. Any filter
described in the docs for
[DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
is valid.
- `owners` (array of strings) - Filters the images by their owner. You
may specify one or more AWS account IDs, "self" (which will use the
account whose credentials you are using to run Packer), or an AWS owner
alias: for example, "amazon", "aws-marketplace", or "microsoft". This
option is required for security reasons.
- `most_recent` (boolean) - Selects the newest created image when true.
This is most useful for selecting a daily distro build.
You may set this in place of `source_ami` or in conjunction with it. If you
set this in conjunction with `source_ami`, the `source_ami` will be added
to the filter. The provided `source_ami` must meet all of the filtering
criteria provided in `source_ami_filter`; this pins the AMI returned by the
filter, but will cause Packer to fail if the `source_ami` does not exist.
- `spot_price` (string) - The maximum hourly price to pay for a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts
when the current spot price is less than the maximum price you specify.
Spot price will be updated based on available spot instance capacity and
current spot instance requests. It may save you some costs. You can set
this to `auto` for Packer to automatically discover the best spot price or
to `0` to use an on-demand instance (default).
- `spot_price_auto_product` (string) - Required if `spot_price` is set to
`auto`. This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`,
`Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)` or
`Windows (Amazon VPC)`
- `spot_tags` (object of key/value strings) - Requires `spot_price` to be
set. This tells Packer to apply tags to the spot request that is issued.
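    Putting the spot options together, a hedged example (the product value and
    tag are illustrative choices, not requirements):

    ``` json
    {
      "spot_price": "auto",
      "spot_price_auto_product": "Linux/UNIX (Amazon VPC)",
      "spot_tags": {
        "Name": "packer-spot-build"
      }
    }
    ```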
- `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but
not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute`
to your AWS IAM policy. Note: you must make sure enhanced networking is
enabled on your instance. See [Amazon's documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
- `ssh_keypair_name` (string) - If specified, this is the key that will be
@@ -319,16 +330,17 @@ builder.
- `ssh_private_ip` (boolean) - No longer supported. See
[`ssh_interface`](#ssh_interface). A fixer exists to migrate.
- `ssh_interface` (string) - One of `public_ip`, `private_ip`, `public_dns`
or `private_dns`. If set, either the public IP address, private IP address,
    public DNS name or private DNS name will be used as the host for SSH. The
default behaviour if inside a VPC is to use the public IP address if
available, otherwise the private IP address will be used. If not in a VPC
the public DNS name will be used. Also works for WinRM.
Where Packer is configured for an outbound proxy but WinRM traffic should
be direct, `ssh_interface` must be set to `private_dns` and
`<region>.compute.internal` included in the `NO_PROXY` environment
variable.
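    For example, to force connections over private DNS behind such a proxy, the
    template fragment might look like the following, with
    `NO_PROXY=<region>.compute.internal` (substituting your region) exported in
    the environment where Packer runs:

    ``` json
    {
      "ssh_interface": "private_dns"
    }
    ```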
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
`subnet-12345def`, where Packer will launch the EC2 instance. This field is
@@ -349,14 +361,15 @@ builder.
}
```
This selects the Subnet with tag `Class` with the value `build`, which has
the most free IP addresses. NOTE: This will fail unless *exactly* one
Subnet is returned. By using `most_free` or `random` one will be selected
from those matching the filter.
NOTE: This will fail unless *exactly* one Subnet is returned. Any
filter described in the docs for
[DescribeSubnets](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
is valid.
- `most_free` (boolean) - The Subnet with the most free IPv4 addresses
@@ -367,14 +380,15 @@ builder.
`subnet_id` take precedence over this.
- `temporary_key_pair_name` (string) - The name of the temporary key pair to
generate. By default, Packer generates a name that looks like
    `packer_<UUID>`, where `<UUID>` is a 36 character unique identifier.
- `temporary_security_group_source_cidr` (string) - An IPv4 CIDR block to be
authorized access to the instance, when packer is creating a temporary
security group. The default is `0.0.0.0/0` (i.e., allow any IPv4 source).
This is only used when `security_group_id` or `security_group_ids` is not
specified.
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
@@ -389,9 +403,9 @@ builder.
data when launching the instance.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC. Requires
`subnet_id` to be set. If this field is left blank, Packer will try to get
the VPC ID from the `subnet_id`.
- `vpc_filter` (object) - Filters used to populate the `vpc_id` field.
Example:
@@ -408,19 +422,21 @@ builder.
}
```
This selects the VPC with tag `Class` with the value `build`, which is not
    the default VPC, and has an IPv4 CIDR block of `/24`. NOTE: This will fail
unless *exactly* one VPC is returned.
- `filters` (map of strings) - filters used to select a `vpc_id`. NOTE:
This will fail unless *exactly* one VPC is returned. Any filter
described in the docs for
[DescribeVpcs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
is valid.
`vpc_id` take precedence over this.
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value:
`10m`
## Basic Example
@@ -473,9 +489,10 @@ environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
Further information on locating AMI IDs and their relationship to instance
types and regions can be found in the AWS EC2 Documentation [for
Linux](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html)
or [for
Windows](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/finding-an-ami.html).
## Accessing the Instance to Debug
@@ -486,16 +503,20 @@ You can use this information to access the instance as it is running.
## Build template data
In configuration directives marked as a template engine above, the following
variables are available:
- `BuildRegion` - The region (for example `eu-central-1`) where Packer is
building the AMI.
- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build
the AMI.
- `SourceAMIName` - The source AMI Name (for example
`ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to
build the AMI.
- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object.
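    As a sketch of how these variables can be consumed, a `tags` block in a
    template-engine-enabled directive might reference them like this (the tag
    keys are arbitrary names chosen for illustration):

    ``` json
    "tags": {
      "Base_AMI_ID": "{{ .SourceAMI }}",
      "Base_AMI_Name": "{{ .SourceAMIName }}",
      "Build_Region": "{{ .BuildRegion }}"
    }
    ```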
    -> **Note:** Packer uses pre-built AMIs as the source for building images.
termination of the instance building the new image. In addition to those
    volumes created by this builder, any volumes in the source AMI which are not
marked for deletion on termination will remain in your account.
View File
@@ -2,8 +2,8 @@
description: |
The amazon-instance Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the storage for the root
device section in the EC2 documentation.
layout: docs
page_title: 'Amazon instance-store - Builders'
sidebar_current: 'docs-builders-amazon-instance'
@ -22,23 +22,24 @@ documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMI
This builder builds an AMI by launching an EC2 instance from an existing
instance-storage backed AMI, provisioning that running machine, and then
bundling and creating a new AMI from that machine. This is all done in your own
AWS account. This builder will create temporary key pairs, security group
rules, etc. that provide it temporary access to the instance while the image is
being created. This simplifies configuration quite a bit.
This builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc. the AMI.
    -> **Note:** Temporary resources are, by default, all created with the
prefix `packer`. This can be useful if you want to restrict the security groups
and key pairs packer is able to operate on.
    -> **Note:** This builder requires that the [Amazon EC2 AMI
Tools](https://aws.amazon.com/developertools/368) are installed onto the
machine. This can be done within a provisioner, but must be done before the
builder finishes running.
    ~> Instance builds are not supported for Windows. Use
[`amazon-ebs`](amazon-ebs.html) instead.
## Configuration Reference
@@ -56,8 +57,8 @@ builder.
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `account_id` (string) - Your AWS account ID. This is required for bundling
the AMI. This is *not the same* as the access key. You can find your
account ID in the security credentials page of your AWS account.
- `ami_name` (string) - The name of the resulting AMI that will appear when
managing AMIs in the AWS console or via APIs. This must be unique. To help
@@ -67,8 +68,8 @@ builder.
- `instance_type` (string) - The EC2 instance type to use while building the
AMI, such as `m1.small`.
- `region` (string) - The name of the region, such as `us-east-1`, in which
to launch the EC2 instance to create the AMI.
- `s3_bucket` (string) - The name of the S3 bucket to upload the AMI. This
bucket will be created if it doesn't exist.
@@ -85,33 +86,34 @@ builder.
the AWS console.
- `x509_key_path` (string) - The local path to the private key for the X509
certificate specified by `x509_cert_path`. This is used for bundling the
AMI.
### Optional:
- `ami_block_device_mappings` (array of block device mappings) - Add one or
more [block device
mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
to the AMI. These will be attached when booting a new instance from your
AMI. To add a block device during the Packer build see
`launch_block_device_mappings` below. Your options here may vary depending
on the type of VM you use. The block device mappings allow for the
following configuration:
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination. Default `false`. **NOTE**: If this
value is not explicitly set to `true` and volumes are not cleaned up by
an alternative method, additional volumes will accumulate after every
build.
- `device_name` (string) - The device name exposed to the instance (for
example, `/dev/sdh` or `xvdh`). Required for every device in the block
device mapping.
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `iops` (number) - The number of I/O operations per second (IOPS) that
the volume supports. See the documentation on
[IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
@@ -120,22 +122,22 @@ builder.
- `snapshot_id` (string) - The ID of the snapshot
- `virtual_name` (string) - The virtual device name. See the
documentation on [Block Device
Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `volume_size` (number) - The size of the volume, in GiB. Required if
not specifying a `snapshot_id`
- `volume_type` (string) - The volume type. `gp2` for General Purpose
(SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard`
for Magnetic volumes
- `ami_description` (string) - The description to set for the resulting
AMI(s). By default this description is empty. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
@@ -143,8 +145,8 @@ builder.
accept any value other than `all`.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with the
AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
@@ -158,83 +160,87 @@ builder.
you are building. This option is required to register HVM images. Can be
`paravirtual` (default) or `hvm`.
- `associate_public_ip_address` (boolean) - If using a non-default VPC,
public IP addresses are not provided by default. If this is toggled, your
new instance will get a Public IP.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
- `block_duration_minutes` (int64) - Requires `spot_price` to be set. The
required duration for the Spot Instances (also known as Spot blocks). This
value must be a multiple of 60 (60, 120, 180, 240, 300, or 360). You can't
specify an Availability Zone group or a launch group if you specify a
duration.
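  As a sketch, a Spot block request might look like this (the product and
  duration values here are illustrative, not recommendations):

  ``` json
  {
    "spot_price": "auto",
    "spot_price_auto_product": "Linux/UNIX",
    "block_duration_minutes": 120
  }
  ```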
- `bundle_destination` (string) - The directory on the running instance where
the bundled AMI will be saved prior to uploading. By default this is
`/tmp`. This directory must exist and be writable.
- `bundle_prefix` (string) - The prefix for files created from bundling the
root volume. By default this is `image-{{timestamp}}`. The `timestamp`
variable should be used to make sure this is unique, otherwise it can
collide with other created AMIs by Packer in your account.
- `bundle_upload_command` (string) - The command to use to upload the bundled
volume. See the "custom bundle commands" section below for more
information.
- `bundle_vol_command` (string) - The command to use to bundle the volume.
See the "custom bundle commands" section below for more information.
- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud
provider whose API is compatible with aws EC2. Specify another endpoint
like this `https://ec2.custom.endpoint.com`.
- `decode_authorization_messages` (boolean) - Enable automatic decoding of
any encoded authorization (error) messages using the
`sts:DecodeAuthorizationMessage` API. Note: requires that the effective
user/role have permissions to `sts:DecodeAuthorizationMessage` on resource
`*`. Default `false`.
- `ebs_optimized` (boolean) - Mark instance as [EBS
Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
Default `false`.
- `ena_support` (boolean) - Enable enhanced networking (ENA but not
SriovNetSupport) on HVM-compatible AMIs. If set, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. If false, this will
disable enhanced networking in the final AMI as opposed to passing the
setting through unchanged from the source. Note: you must make sure
enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source
  instance to burst additional CPU beyond its available [CPU
  Credits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html)
  for as long as the demand exists. This is in contrast to the standard
  configuration that only allows an instance to consume up to its available
  CPU Credits. See the AWS documentation for [T2
  Unlimited](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html)
and the 'T2 Unlimited Pricing' section of the [Amazon EC2 On-Demand
Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more
information. By default this option is disabled and Packer will set up a
[T2
Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html)
instance instead.
To use T2 Unlimited you must use a T2 instance type e.g. t2.micro.
Additionally, T2 Unlimited cannot be used in conjunction with Spot
Instances e.g. when the `spot_price` option has been configured. Attempting
to do so will cause an error.
!> **Warning!** Additional costs may be incurred by enabling T2
Unlimited - even for instances that would usually qualify for the [AWS Free
Tier](https://aws.amazon.com/free/).
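As a sketch, enabling T2 Unlimited on a burstable source instance might look
like this (the instance type is illustrative):

``` json
{
  "instance_type": "t2.micro",
  "enable_t2_unlimited": true
}
```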
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Defaults to `false`.
- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots
associated with AMIs, which have been deregistered by `force_deregister`.
Defaults to `false`.
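  A minimal sketch combining the two options so that a rebuilt AMI fully
  replaces its predecessor, including the backing snapshots (`my-app-base` is
  a hypothetical AMI name):

  ``` json
  {
    "ami_name": "my-app-base",
    "force_deregister": true,
    "force_delete_snapshot": true
  }
  ```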
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
new AMI, the instance automatically launches with these additional volumes,
and will restore them from snapshots taken from the source instance.
- `mfa_code` (string) - The MFA
[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the
time.
- `profile` (string) - The profile to use in the shared credentials file for
AWS. See Amazon's documentation on [specifying
profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles)
for more details.
- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami
to, along with the custom kms key id (alias or arn) to use for encryption
for that region. Keys must match the regions provided in `ami_regions`. If
you just want to encrypt using a default ID, you can stick with
`kms_key_id` and `ami_regions`. If you want a region to be encrypted with
that region's default key ID, you can use an empty string `""` instead of a
key id in this map. (e.g. `"us-east-1": ""`) However, you cannot use
default key IDs if you are using this in conjunction with `snapshot_users`
-- in that situation you must use custom keys. For valid formats see
*KmsKeyId* in the [AWS API docs -
CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).
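  For illustration, a sketch that copies the AMI to two regions, encrypting
  one with a custom key alias and the other with that region's default key
  (the key alias is hypothetical):

  ``` json
  {
    "ami_regions": ["us-east-1", "eu-west-1"],
    "region_kms_key_ids": {
      "us-east-1": "alias/my-packer-key",
      "eu-west-1": ""
    }
  }
  ```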
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`. This is a [template
engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
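  A short sketch tagging only the build instance; remember that these tags
  must be duplicated in `tags` to survive onto the AMI:

  ``` json
  {
    "run_tags": {
      "Purpose": "packer-build",
      "Started": "{{timestamp}}"
    }
  }
  ```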
- `security_group_id` (string) - The ID (*not* the name) of the security
group to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `security_group_filter` (object) - Filters used to populate the
`security_group_ids` field. Example:
``` json
  {
    "security_group_filter": {
      "filters": {
        "tag:Class": "packer"
      }
    }
  }
  ```
This selects the security groups with tag `Class` with the value `packer`.
- `filters` (map of strings) - filters used to select a
`security_group_ids`. Any filter described in the docs for
[DescribeSecurityGroups](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
is valid.
`security_group_ids` take precedence over this.
validation of the region configuration option. Defaults to `false`.
- `snapshot_groups` (array of strings) - A list of groups that have access to
create volumes from the snapshot(s). By default no groups have permission
  to create volumes from the snapshot(s). `all` will make the snapshot
publicly accessible.
- `snapshot_users` (array of strings) - A list of account IDs that have
access to create volumes from the snapshot(s). By default no additional
  users other than the user creating the AMI have permissions to create
volumes from the backing snapshot(s).
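  As an illustrative sketch, sharing the backing snapshots with one other
  account (the account ID is a placeholder):

  ``` json
  {
    "snapshot_users": ["123456789012"]
  }
  ```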
- `source_ami_filter` (object) - Filters used to populate the `source_ami`
field. Example:
``` json
  {
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
        "root-device-type": "ebs"
      },
      "owners": ["099720109477"],
      "most_recent": true
    }
  }
```
This selects the most recent Ubuntu 16.04 HVM EBS AMI from Canonical. NOTE:
This will fail unless *exactly* one AMI is returned. In the above example,
`most_recent` will cause this to succeed by selecting the newest image.
- `filters` (map of strings) - filters used to select a `source_ami`.
NOTE: This will fail unless *exactly* one AMI is returned. Any filter
described in the docs for
[DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
is valid.
- `owners` (array of strings) - Filters the images by their owner. You
may specify one or more AWS account IDs, "self" (which will use the
account whose credentials you are using to run Packer), or an AWS owner
alias: for example, "amazon", "aws-marketplace", or "microsoft". This
option is required for security reasons.
- `most_recent` (boolean) - Selects the newest created image when true.
This is most useful for selecting a daily distro build.
You may set this in place of `source_ami` or in conjunction with it. If you
set this in conjunction with `source_ami`, the `source_ami` will be added
to the filter. The provided `source_ami` must meet all of the filtering
criteria provided in `source_ami_filter`; this pins the AMI returned by the
filter, but will cause Packer to fail if the `source_ami` does not exist.
- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot.
They will override AMI tags if already applied to snapshot.
for Packer to automatically discover the best spot price or to `0` to use
an on-demand instance (default).
- `spot_price_auto_product` (string) - Required if `spot_price` is set to
`auto`. This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`,
`Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`,
`Windows (Amazon VPC)`
- `spot_tags` (object of key/value strings) - Requires `spot_price` to be
set. This tells Packer to apply tags to the spot request that is issued.
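  A hedged sketch tagging the Spot request itself (the price is illustrative):

  ``` json
  {
    "spot_price": "0.03",
    "spot_tags": {
      "Name": "packer-spot-build"
    }
  }
  ```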
- `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but
not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute`
to your AWS IAM policy. Note: you must make sure enhanced networking is
enabled on your instance. See [Amazon's documentation on enabling enhanced
networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking).
Default `false`.
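  For example, a sketch enabling both enhanced-networking mechanisms on an
  HVM-compatible source AMI (both options also require
  `ec2:ModifyInstanceAttribute` in your IAM policy, as noted above):

  ``` json
  {
    "ena_support": true,
    "sriov_support": true
  }
  ```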
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. The key must match a key pair name loaded up
into Amazon EC2. By default, this is blank, and Packer will generate a
temporary key pair unless
[`ssh_password`](/docs/templates/communicator.html#ssh_password) is used.
[`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file)
or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized.
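  A minimal sketch using an existing EC2 key pair instead of a temporary one
  (the key pair name and path are hypothetical):

  ``` json
  {
    "ssh_keypair_name": "packer-builds",
    "ssh_private_key_file": "~/.ssh/packer-builds.pem"
  }
  ```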
- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
authenticate connections to the source instance. No temporary key pair will
be created, and the values of `ssh_password` and `ssh_private_key_file`
will be ignored. To use this option with a key pair already configured in
the source AMI, leave the `ssh_keypair_name` blank. To associate an
existing key pair in AWS with the source instance, set the
`ssh_keypair_name` field to the name of the key pair.
- `ssh_private_ip` (boolean) - No longer supported. See
[`ssh_interface`](#ssh_interface). A fixer exists to migrate.
- `ssh_interface` (string) - One of `public_ip`, `private_ip`, `public_dns`
or `private_dns`. If set, either the public IP address, private IP address,
public DNS name or private DNS name will used as the host for SSH. The
default behaviour if inside a VPC is to use the public IP address if
available, otherwise the private IP address will be used. If not in a VPC
the public DNS name will be used. Also works for WinRM.
Where Packer is configured for an outbound proxy but WinRM traffic should
be direct, `ssh_interface` must be set to `private_dns` and
`<region>.compute.internal` included in the `NO_PROXY` environment
variable.
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
`subnet-12345def`, where Packer will launch the EC2 instance. This field is
  required if you are using an existing VPC.

- `subnet_filter` (object) - Filters used to populate the `subnet_id` field.
  Example:

  ``` json
  {
    "subnet_filter": {
      "filters": {
        "tag:Class": "build"
      },
      "most_free": true,
      "random": false
    }
  }
  ```
This selects the Subnet with tag `Class` with the value `build`, which has
the most free IP addresses. NOTE: This will fail unless *exactly* one
Subnet is returned. By using `most_free` or `random` one will be selected
from those matching the filter.
- `filters` (map of strings) - filters used to select a `subnet_id`.
NOTE: This will fail unless *exactly* one Subnet is returned. Any
filter described in the docs for
[DescribeSubnets](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
is valid.
- `most_free` (boolean) - The Subnet with the most free IPv4 addresses
`subnet_id` take precedence over this.
- `tags` (object of key/value strings) - Tags applied to the AMI. This is a
[template engine](/docs/templates/engine.html), see [Build template
data](#build-template-data) for more information.
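  As a sketch, AMI tags can combine literal values with template-engine
  variables such as `{{timestamp}}` or the build template data described
  later in this page:

  ``` json
  {
    "tags": {
      "OS_Version": "Ubuntu",
      "Base_AMI_Name": "{{ .SourceAMIName }}",
      "Built": "{{timestamp}}"
    }
  }
  ```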
- `temporary_key_pair_name` (string) - The name of the temporary key pair to
generate. By default, Packer generates a name that looks like
  `packer_<UUID>`, where `<UUID>` is a 36 character unique identifier.
- `temporary_security_group_source_cidr` (string) - An IPv4 CIDR block to be
authorized access to the instance, when packer is creating a temporary
security group. The default is `0.0.0.0/0` (i.e., allow any IPv4 source).
This is only used when `security_group_id` or `security_group_ids` is not
specified.
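  For example, restricting the temporary security group to a single office
  range (the CIDR below is a documentation range; substitute your own):

  ``` json
  {
    "temporary_security_group_source_cidr": "203.0.113.0/24"
  }
  ```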
- `user_data` (string) - User data to apply when launching the instance. Note
that you need to be careful about escaping characters due to the templates
data when launching the instance.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC. Requires
`subnet_id` to be set. If this field is left blank, Packer will try to get
the VPC ID from the `subnet_id`.
- `vpc_filter` (object) - Filters used to populate the `vpc_id` field.
Example:
  ``` json
  {
    "vpc_filter": {
      "filters": {
        "tag:Class": "build",
        "isDefault": "false",
        "cidr": "/24"
      }
    }
  }
```
  This selects the VPC with tag `Class` with the value `build`, which is not
  the default VPC, and has an IPv4 CIDR block of `/24`. NOTE: This will fail
  unless *exactly* one VPC is returned.
- `filters` (map of strings) - filters used to select a `vpc_id`. NOTE:
This will fail unless *exactly* one VPC is returned. Any filter
described in the docs for
[DescribeVpcs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
is valid.
`vpc_id` take precedence over this.
- `x509_upload_path` (string) - The path on the remote machine where the X509
certificate will be uploaded. This path must already exist and be writable.
X509 certificates are uploaded after provisioning is run, so it is
perfectly okay to create this directory as part of the provisioning
process. Defaults to `/tmp`.
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value:
`10m`
## Basic Example
## Build template data
In configuration directives marked as a template engine above, the following
variables are available:
- `BuildRegion` - The region (for example `eu-central-1`) where Packer is
building the AMI.
- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build
the AMI.
- `SourceAMIName` - The source AMI Name (for example
`ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to
build the AMI.
- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object.
## Custom Bundle Commands
A lot of the process required for creating an instance-store backed AMI
involves commands being run on the actual source instance. Specifically, the
`ec2-bundle-vol` and `ec2-upload-bundle` commands must be used to bundle the
root filesystem and upload it, respectively.
Each of these commands has a lot of available flags. Instead of exposing each
possible flag as a template configuration option, the instance-store AMI
builder for Packer lets you customize the entire command used to bundle and
upload the AMI.
These are configured with `bundle_vol_command` and `bundle_upload_command`.
Both of these configurations are [configuration
templates](/docs/templates/engine.html) and have support for their own set of
template variables.
### Bundle Volume Command
### Bundle Upload Command
The default value for `bundle_upload_command` is shown below. It is split
across multiple lines for convenience of reading. Access key and secret key are
omitted if using instance profile. The bundle upload command is responsible for
taking the bundled volume and uploading it to S3.
``` text
sudo -i -n ec2-upload-bundle \
  -b {{.BucketName}} \
  -m {{.ManifestPath}} \
  -a {{.AccessKey}} \
  -s {{.SecretKey}} \
  -d {{.BundleDirectory}} \
  --batch \
  --region {{.Region}} \
  --retry
```
The available template variables should be self-explanatory based on the
parameters they're used to satisfy the `ec2-upload-bundle` command.
Additionally, `{{.Token}}` is available when overriding this command. You must
create your own bundle command with the addition of `-t {{.Token}}` if you are
assuming a role.
#### Bundle Upload Permissions
The `ec2-upload-bundle` requires a policy document that looks something like
this:
``` json
{
"Version": "2012-10-17",
"Statement": [

    ...
  ]
}
```
Packer supports the following builders at the moment:
- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
launching a source AMI and re-packaging it into a new AMI after
provisioning. If in doubt, use this builder, which is the easiest to get
started with.
- [amazon-instance](/docs/builders/amazon-instance.html) - Create
instance-store AMIs by launching and provisioning a source instance, then
Packer is able to create Amazon EBS Volumes which are preinitialized with a
filesystem and data.
- [amazon-ebsvolume](/docs/builders/amazon-ebsvolume.html) - Create EBS
volumes by launching a source AMI with block devices mapped. Provision the
instance, then destroy it, retaining the EBS volumes.
<span id="specifying-amazon-credentials"></span>
Static credentials can be provided in the form of an access key id and secret.
These look like:
``` json
{
"access_key": "AKIAIOSFODNN7EXAMPLE",
"secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "region": "us-east-1",
    "type": "amazon-ebs"
}
```

### Environment variables

You can provide your credentials via the `AWS_ACCESS_KEY_ID` and
`AWS_SECRET_ACCESS_KEY` environment variables, representing your AWS Access
Key and AWS Secret Key, respectively. Note that setting your AWS credentials
using either these environment variables will override the use of
`AWS_SHARED_CREDENTIALS_FILE` and `AWS_PROFILE`. The `AWS_DEFAULT_REGION` and
`AWS_SESSION_TOKEN` environment variables are also used, if applicable:
Usage:
```
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_DEFAULT_REGION="us-west-2"
$ packer build packer.json
```
### Shared Credentials file
You can use an AWS credentials file to specify your credentials. The default
location is $HOME/.aws/credentials on Linux and OS X, or
"%USERPROFILE%\.aws\credentials" for Windows users. If we fail to detect
credentials inline, or in the environment, Packer will check this location. You
can optionally specify a different location in the configuration by setting the
environment with the `AWS_SHARED_CREDENTIALS_FILE` variable.
The format for the credentials file is like so
```
[default]
aws_access_key_id=<your access key id>
aws_secret_access_key=<your secret access key>
```
You may also configure the profile to use by setting the `profile`
configuration option, or setting the `AWS_PROFILE` environment variable:
``` json
{
"profile": "customprofile",
"region": "us-east-1",
    "type": "amazon-ebs"
}
```
### IAM Task or Instance Role
Finally, Packer will use credentials provided by the task's or instance's IAM
Note that if you'd like to create a spot instance, you must also add:
```
ec2:RequestSpotInstances,
ec2:CancelSpotInstanceRequests,
ec2:DescribeSpotInstanceRequests
```
If you have the `spot_price` parameter set to `auto`, you must also add:
```
ec2:DescribeSpotPriceHistory
```
## Troubleshooting
==> amazon-ebs: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials
<http://www.time.gov/>. On Linux/OS X, you can run the `date` command to get
the current time. If you're on Linux, you can try setting the time with ntp by
running `sudo ntpd -q`.
### `exceeded wait attempts` while waiting for tasks to complete
We use the AWS SDK's built-in waiters to wait for longer-running tasks to
complete. These waiters have default delays between queries and maximum number
of queries that don't always work for our users.
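When the defaults fall short, the waiter behavior can be tuned with environment variables before running the build. The variable names below follow the AWS builder documentation, but verify them against the Packer version you are running:

```shell
# Override the AWS SDK waiter defaults (names per Packer's AWS builder docs;
# confirm against your Packer version before relying on them).
export AWS_POLL_DELAY_SECONDS=5   # seconds to wait between status queries
export AWS_MAX_ATTEMPTS=400       # maximum number of queries before giving up
# then run, e.g.: packer build template.json
```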
@@ -9,31 +9,46 @@ sidebar_current: 'docs-builders-azure-setup'
# Authorizing Packer Builds in Azure
In order to build VMs in Azure, Packer needs 6 configuration options to be
specified:
- `subscription_id` - UUID identifying your Azure subscription (where billing
is handled)
- `client_id` - UUID identifying the Active Directory service principal that
will run your Packer builds
- `client_secret` - service principal secret / password
- `resource_group_name` - name of the resource group where your VHD(s) will
be stored
- `storage_account` - name of the storage account where your VHD(s) will be
stored
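As a sketch, the options above land in the `builders` block of a Packer template; every value below is a placeholder, not a real credential:

```json
{
  "builders": [
    {
      "type": "azure-arm",
      "subscription_id": "00000000-0000-0000-0000-000000000000",
      "client_id": "00000000-0000-0000-0000-000000000000",
      "client_secret": "your-service-principal-secret",
      "resource_group_name": "packer-artifacts-rg",
      "storage_account": "packerartifacts"
    }
  ]
}
```

The rest of this page walks through obtaining each of these values.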
-&gt; Behind the scenes Packer uses the OAuth protocol to authenticate against
Azure Active Directory and authorize requests to the Azure Service Management
API. These topics are unnecessarily complicated so we will try to ignore them
for the rest of this document.<br /><br />You do not need to understand how
OAuth works in order to use Packer with Azure, though the Active Directory
terms "service principal" and "role" will be useful for understanding Azure's
access policies.
In order to get all of the items above, you will need a username and password
for your Azure account.
## Device Login
Device login is an alternative way to authorize in Azure Packer. Device login
only requires you to know your Subscription ID. (Device login is only supported
for Linux based VMs.) Device login is intended for those who are first time
users, and just want to "kick the tires." We recommend the SPN approach if
you intend to automate Packer.
> Device login is for **interactive** builds, and SPN is for **automated** builds.
There are three pieces of information you must provide to enable device login
mode.
1. SubscriptionID
2. Resource Group - parent resource group that Packer uses to build an image.
@@ -43,31 +58,47 @@ There are three pieces of information you must provide to enable device login mo
> Device login mode is for the Public and US Gov clouds only.
The device login flow asks that you open a web browser, navigate to
<http://aka.ms/devicelogin>, and input the supplied code. This authorizes the
Packer for Azure application to act on your behalf. An OAuth token will be
created, and stored in the user's home directory
(~/.azure/packer/oauth-TenantID.json). This token is used if the token file
exists, and it is refreshed as necessary. The token file prevents the need to
continually execute the device login flow. Packer will ask for two device login
auth, one for service management endpoint and another for accessing temp
keyvault secrets that it creates.
## Install the Azure CLI
To get the credentials above, we will need to install the Azure CLI. Please
refer to Microsoft's official [installation
guide](https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/).
-&gt; The guides below also use a tool called
[`jq`](https://stedolan.github.io/jq/) to simplify the output from the Azure
CLI, though this is optional. If you use homebrew you can simply
`brew install node jq`.
You can also use the Azure CLI in Docker. It also comes with `jq`
pre-installed:
``` shell
$ docker run -it microsoft/azure-cli
```
## Guided Setup
The Packer project includes a [setup
script](https://github.com/hashicorp/packer/blob/master/contrib/azure-setup.sh)
that can help you set up your account. It uses an interactive bash script to log
you into Azure, name your resources, and export your Packer configuration.
## Manual Setup
If you want more control or the script does not work for you, you can also use
the manual instructions below to set up your Azure account. You will need to
manually keep track of the various account identifiers, resource names, and
your service principal password.
### Identify Your Tenant and Subscription IDs
@@ -78,9 +109,10 @@ $ az login
# Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
```
Once you've completed logging in, you should get a JSON array like the one
below:
``` shell
[
{
"cloudName": "AzureCloud",
@@ -95,8 +127,8 @@ Once you've completed logging in, you should get a JSON array like the one below
}
}
]
```
Get your account information:
``` shell
@@ -105,7 +137,9 @@ $ az account set --subscription ACCOUNTNAME
$ az account show --output json | jq -r '.id'
```
-&gt; Throughout this document when you see a command pipe to `jq` you may
instead omit `--output json` and everything after it, but the output will be
more verbose. For example you can simply run `az account list` instead.
This will print out one line that looks like this:
@@ -115,7 +149,10 @@ This is your `subscription_id`. Note it for later.
### Create a Resource Group
A [resource
group](https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/#resource-groups)
is used to organize related resources. Resource groups and storage accounts are
tied to a location. To see available locations, run:
``` shell
$ az account list-locations
@@ -126,11 +163,14 @@ $ GROUPNAME=xxx
$ az group create --name $GROUPNAME --location $LOCATION
```
Your storage account (below) will need to use the same `GROUPNAME` and
`LOCATION`.
### Create a Storage Account
We will need to create a storage account where your Packer artifacts will be
stored. We will create a `LRS` storage account which is the least expensive
price/GB at the time of writing.
``` shell
$ az storage account create \
@@ -141,22 +181,29 @@ $ az storage account create \
--kind Storage
```
-&gt; `LRS` and `Standard_LRS` are meant as literal "LRS" or "Standard\_LRS"
and not as variables.
Make sure that `GROUPNAME` and `LOCATION` are the same as above. Also, ensure
that `GROUPNAME` is less than 24 characters long and contains only lowercase
letters and numbers.
### Create an Application
An application represents a way to authorize access to the Azure API. Note that
you will need to specify a URL for your application (this is intended to be
used for OAuth callbacks) but these do not actually need to be valid URLs.
First pick APPNAME, APPURL and PASSWORD:
``` shell
APPNAME=packer.test
APPURL=packer.test
PASSWORD=xxx
```
Password is your `client_secret` and can be anything you like. I recommend
using `openssl rand -base64 24`.
``` shell
$ az ad app create \
@@ -168,16 +215,21 @@ $ az ad app create \
### Create a Service Principal
You cannot directly grant permissions to an application. Instead, you create a
service principal and assign permissions to the service principal. To create a
service principal for use with Packer, run the below command specifying the
subscription. This will grant Packer the contributor role to the subscription.
The output of this command is your service principal credentials; save these in
a safe place as you will need these to configure Packer.
``` shell
az ad sp create-for-rbac -n "Packer" --role contributor \
--scopes /subscriptions/{SubID}
```
The service principal credentials look like this:
``` shell
{
"appId": "AppId",
"displayName": "Packer",
@@ -187,7 +239,9 @@ The service principal credentials.
}
```
There are a lot of pre-defined roles and you can define your own with more
granular permissions, though this is out of scope. You can see a list of
pre-configured roles via:
``` shell
$ az role definition list --output json | jq ".[] | {name:.roleName, description:.description}"
@@ -195,4 +249,6 @@ $ az role definition list --output json | jq ".[] | {name:.roleName, description
### Configuring Packer
Now (finally) everything has been set up in Azure and our service principal has
been created. You can use the output from creating your service principal in
your template.
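For illustration, the service principal output maps onto the builder configuration roughly as `appId` → `client_id`, the generated password → `client_secret`, and the tenant → `tenant_id`; all values below are placeholders:

```json
{
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "appId-from-the-output",
      "client_secret": "password-from-the-output",
      "tenant_id": "tenant-from-the-output",
      "subscription_id": "your-subscription-id"
    }
  ]
}
```

If `tenant_id` is omitted, Packer will look it up using your `subscription_id`.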
@@ -9,73 +9,106 @@ sidebar_current: 'docs-builders-azure'
Type: `azure-arm`
Packer supports building VHDs in [Azure Resource
Manager](https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/).
Azure provides new users a [$200 credit for the first 30
days](https://azure.microsoft.com/en-us/free/); after which you will incur
costs for VMs built and stored using Packer.
Unlike most Packer builders, the artifact produced by the ARM builder is a VHD
(virtual hard disk), not a full virtual machine image. This means you will need
to [perform some additional
steps](https://github.com/Azure/packer-azure/issues/201) in order to launch a
VM from your build artifact.
Azure uses a combination of OAuth and Active Directory to authorize requests to
the ARM API. Learn how to [authorize access to
ARM](/docs/builders/azure-setup.html).
The documentation below references command output from the [Azure
CLI](https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/).
## Configuration Reference
The following configuration options are available for building Azure images. In
addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:
- `client_id` (string) The Active Directory service principal associated with
your builder.
- `client_secret` (string) The password or secret for your service principal.
- `subscription_id` (string) Subscription under which the build will be
performed. **The service principal specified in `client_id` must have full
access to this subscription, unless build\_resource\_group\_name option is
specified in which case it needs to have owner access to the existing
resource group specified in build\_resource\_group\_name parameter.**
- `image_publisher` (string) PublisherName for your base image. See
[documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/)
for details.
CLI example `az vm image list-publishers --location westus`
- `image_offer` (string) Offer for your base image. See
[documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/)
for details.
CLI example
`az vm image list-offers --location westus --publisher Canonical`
- `image_sku` (string) SKU for your base image. See
[documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/)
for details.
CLI example
`az vm image list-skus --location westus --publisher Canonical --offer UbuntuServer`
#### VHD or Managed Image
The Azure builder can create either a VHD, or a managed image. If you are
creating a VHD, you **must** start with a VHD. Likewise, if you want to create
a managed image you **must** start with a managed image. When creating a VHD
the following two options are required.
- `capture_container_name` (string) Destination container name. Essentially
the "directory" where your VHD will be organized in Azure. The captured
VHD's URL will be
`https://<storage_account>.blob.core.windows.net/system/Microsoft.Compute/Images/<capture_container_name>/<capture_name_prefix>.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd`.
- `capture_name_prefix` (string) VHD prefix. The final artifacts will be
named `PREFIX-osDisk.UUID` and `PREFIX-vmTemplate.UUID`.
- `resource_group_name` (string) Resource group under which the final
artifact will be stored.
- `storage_account` (string) Storage account under which the final artifact
will be stored.
When creating a managed image the following two options are required.
- `managed_image_name` (string) Specify the managed image name where the
result of the Packer build will be saved. The image name must not exist
ahead of time, and will not be overwritten. If this value is set, the value
`managed_image_resource_group_name` must also be set. See
[documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images)
to learn more about managed images.
- `managed_image_resource_group_name` (string) Specify the managed image
resource group name where the result of the Packer build will be saved. The
resource group must already exist. If this value is set, the value
`managed_image_name` must also be set. See
[documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images)
to learn more about managed images.
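Putting the required authentication options together with the managed-image pair, a minimal build configuration might look like the sketch below. The image reference and resource names are illustrative placeholders, and `location` and `os_type` are assumptions a real template typically needs:

```json
{
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "00000000-0000-0000-0000-000000000000",
      "client_secret": "your-client-secret",
      "subscription_id": "00000000-0000-0000-0000-000000000000",
      "os_type": "Linux",
      "image_publisher": "Canonical",
      "image_offer": "UbuntuServer",
      "image_sku": "16.04-LTS",
      "location": "West US",
      "managed_image_name": "myPackerImage",
      "managed_image_resource_group_name": "packer-images-rg"
    }
  ]
}
```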
#### Resource Group Usage
The Azure builder can either provision resources into a new resource group that
it controls (default) or an existing one. The advantage of using a packer
defined resource group is that failed resource cleanup is easier because you
can simply remove the entire resource group, however this means that the
provided credentials must have permission to create and remove resource groups.
@@ -92,144 +125,184 @@ To have packer create a resource group you **must** provide:
and optionally:
- `temp_resource_group_name` (string) name assigned to the temporary resource
group created during the build. If this value is not set, a random value
will be assigned. This resource group is deleted at the end of the build.
To use an existing resource group you **must** provide:
- `build_resource_group_name` (string) - Specify an existing resource group
to run the build in.
Providing `temp_resource_group_name` or `location` in combination with
`build_resource_group_name` is not allowed.
### Optional:
- `azure_tags` (object of name/value strings) - the user can define up to 15
tags. Tag names cannot exceed 512 characters, and tag values cannot exceed
256 characters. Tags are applied to every resource deployed by a Packer
build, i.e. Resource Group, VM, NIC, VNET, Public IP, KeyVault, etc.
- `cloud_environment_name` (string) One of `Public`, `China`, `Germany`, or
`USGovernment`. Defaults to `Public`. Long forms such as
`USGovernmentCloud` and `AzureUSGovernmentCloud` are also supported.
- `custom_data_file` (string) Specify a file containing custom data to inject
into the cloud-init process. The contents of the file are read, base64
encoded, and injected into the ARM template. The custom data will be passed
to cloud-init for processing at the time of provisioning. See
[documentation](http://cloudinit.readthedocs.io/en/latest/topics/examples.html)
to learn more about custom data, and how it can be used to influence the
provisioning process.
- `custom_managed_image_name` (string) Specify the source managed image's
name to use. If this value is set, do not set image\_publisher,
image\_offer, image\_sku, or image\_version. If this value is set, the
value `custom_managed_image_resource_group_name` must also be set. See
[documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images)
to learn more about managed images.
- `custom_managed_image_resource_group_name` (string) Specify the source
managed image's resource group to use. If this value is set, do not
set image\_publisher, image\_offer, image\_sku, or image\_version. If this
value is set, the value `custom_managed_image_name` must also be set. See
[documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images)
to learn more about managed images.
- `image_version` (string) Specify a specific version of an OS to boot from.
Defaults to `latest`. There may be a difference in versions available
across regions due to image synchronization latency. To ensure a consistent
version across regions set this value to one that is available in all
regions where you are deploying.
CLI example
`az vm image list --location westus --publisher Canonical --offer UbuntuServer --sku 16.04.0-LTS --all`
- `image_url` (string) Specify a custom VHD to use. If this value is set, do
not set image\_publisher, image\_offer, image\_sku, or image\_version.
- `managed_image_storage_account_type` (string) Specify the storage account
type for a managed image. Valid values are Standard\_LRS and Premium\_LRS.
The default is Standard\_LRS.
- `os_disk_size_gb` (number) Specify the size of the OS disk in GB
(gigabytes). Values of zero or less than zero are ignored.
- `disk_additional_size` (array of integers) - The size(s) of any additional
hard disks for the VM in gigabytes. If this is not specified then the VM
will only contain an OS disk. The number of additional disks and maximum
size of a disk depends on the configuration of your VM. See
[Windows](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds)
or
[Linux](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/about-disks-and-vhds)
for more information.
For VHD builds the final artifacts will be named
`PREFIX-dataDisk-<n>.UUID.vhd` and stored in the specified capture
container alongside the OS disk. The additional disks are included in the
deployment template `PREFIX-vmTemplate.UUID`.
For managed builds, the final artifacts are included in the managed image.
The additional disk will have the same storage account type as the OS disk,
as specified with the `managed_image_storage_account_type` setting.
- `os_type` (string) If either `Linux` or `Windows` is specified Packer will
automatically configure authentication credentials for the provisioned
machine. For `Linux` this configures an SSH authorized key. For `Windows`
this configures a WinRM certificate.
- `plan_info` (object) - Used for creating images from Marketplace images.
Please refer to [Deploy an image with Marketplace
terms](https://aka.ms/azuremarketplaceapideployment) for more details. Not
all Marketplace images support programmatic deployment, and support is
controlled by the image publisher.
An example plan\_info object is defined below.
``` json
{
"plan_info": {
"plan_name": "rabbitmq",
"plan_product": "rabbitmq",
"plan_publisher": "bitnami"
}
}
```
- `plan_name` (string) - The plan name, required.
- `plan_product` (string) - The plan product, required.
- `plan_publisher` (string) - The plan publisher, required.
- `plan_promotion_code` (string) - Some images accept a promotion code, optional.
Images created from the Marketplace with `plan_info` **must** specify
`plan_info` whenever the image is deployed. The builder automatically adds
tags to the image to ensure this information is not lost. The following
tags are added.
1. PlanName
2. PlanProduct
3. PlanPublisher
4. PlanPromotionCode
- `shared_image_gallery` (object) Use a [Shared Gallery
image](https://azure.microsoft.com/en-us/blog/announcing-the-public-preview-of-shared-image-gallery/)
as the source for this build. *VHD targets are incompatible with this build
type* - the target must be a *Managed Image*.
"shared_image_gallery": {
"subscription": "00000000-0000-0000-0000-00000000000",
"resource_group": "ResourceGroup",
"gallery_name": "GalleryName",
"image_name": "ImageName",
"image_version": "1.0.0"
}
"managed_image_name": "TargetImageName",
"managed_image_resource_group_name": "TargetResourceGroup"
- `temp_compute_name` (string) - Temporary name assigned to the VM. If this
value is not set, a random value will be assigned. Knowing the resource
group and VM name allows one to execute commands to update the VM during a
Packer build, e.g. attach a resource disk to the VM.
- `tenant_id` (string) The account identifier with which your `client_id` and
`subscription_id` are associated. If not specified, `tenant_id` will be
looked up using `subscription_id`.
- `private_virtual_network_with_public_ip` (boolean) This value allows you to
set a `virtual_network_name` and obtain a public IP. If this value is not
set and `virtual_network_name` is defined Packer is only allowed to be
executed from a host on the same subnet / virtual network.
- `virtual_network_name` (string) Use a pre-existing virtual network for the
VM. This option enables private communication with the VM, no public IP
address is **used** or **provisioned** (unless you set
`private_virtual_network_with_public_ip`).
- `virtual_network_resource_group_name` (string) If virtual\_network\_name is
set, this value **may** also be set. If virtual\_network\_name is set, and
this value is not set the builder attempts to determine the resource group
containing the virtual network. If the resource group cannot be found, or
it cannot be disambiguated, this value should be set.
- `virtual_network_subnet_name` (string) If virtual\_network\_name is set,
this value **may** also be set. If virtual\_network\_name is set, and this
value is not set the builder attempts to determine the subnet to use with
the virtual network. If the subnet cannot be found, or it cannot be
disambiguated, this value should be set.
- `vm_size` (string) Size of the VM used for building. This can be changed
when you deploy a VM from your VHD. See
[pricing](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/)
information. Defaults to `Standard_A1`.
CLI example `az vm list-sizes --location westus`
- `async_resourcegroup_delete` (boolean) - Set this to `true` if you want
Packer to delete the temporary resource group asynchronously. Defaults to
`false`. **Important** Setting this to `true` makes your builds faster,
however any failed deletes are not reported.
## Basic Example
@ -265,13 +338,22 @@ Here is a basic example for Azure.
## Deprovision
Azure VMs should be deprovisioned at the end of every build. For Windows this
means executing sysprep, and for Linux this means executing the waagent
deprovision process.
Please refer to the Azure
[examples](https://github.com/hashicorp/packer/tree/master/examples/azure) for
complete examples showing the deprovision process.
### Windows
The following provisioner snippet shows how to sysprep a Windows VM.
Deprovision should be the last operation executed by a build. The code below
will wait for sysprep to write the image status in the registry and will exit
after that. The possible states, in case you want to wait for another state,
[are documented here](https://technet.microsoft.com/en-us/library/hh824815.aspx).
``` json
{
@ -289,7 +371,8 @@ The following provisioner snippet shows how to sysprep a Windows VM. Deprovision
### Linux
The following provisioner snippet shows how to deprovision a Linux VM.
Deprovision should be the last operation executed by a build.
``` json
{
@ -306,43 +389,59 @@ The following provisioner snippet shows how to deprovision a Linux VM. Deprovisi
}
```
To learn more about the Linux deprovision process please see WALinuxAgent's
[README](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
#### skip\_clean
Customers have reported issues with the deprovision process where the builder
hangs. The error message is similar to the following.
    Build 'azure-arm' errored: Retryable error: Error removing temporary script at /tmp/script_9899.sh: ssh: handshake failed: EOF
One solution is to set skip\_clean to true in the provisioner. This prevents
Packer from cleaning up any helper scripts uploaded to the VM during the build.
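As a sketch, a shell provisioner with `skip_clean` enabled might look like the
following; the inline deprovision command mirrors the Linux example above, so
adjust it to your image:

``` json
{
  "type": "shell",
  "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
  "inline": [
    "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
  ],
  "inline_shebang": "/bin/sh -x",
  "skip_clean": true
}
```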
## Defaults
The Azure builder attempts to pick default values that provide a "just works"
experience. These values can be changed by the user to more suitable values.
- The default user name is packer, not root as in other builders. Most
    distros on Azure do not allow root to SSH to a VM, hence the need for a
    non-root default user. Set the ssh\_username option to override the
    default value.
- The default VM size is Standard\_A1. Set the vm\_size option to override
the default value.
- The default image version is latest. Set the image\_version option to
override the default value.
- By default a temporary resource group will be created and destroyed as part
of the build. If you do not have permissions to do so, use
`build_resource_group_name` to specify an existing resource group to run
the build in.
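For illustration, a builder block overriding each of these defaults might look
like this; all values here are placeholders, not recommendations:

``` json
{
  "type": "azure-arm",
  "ssh_username": "ops",
  "vm_size": "Standard_DS2_v2",
  "image_version": "latest",
  "build_resource_group_name": "existing-rg"
}
```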
## Implementation
~> **Warning!** This is an advanced topic. You do not need to understand the
implementation to use the Azure builder.
The Azure builder uses ARM
[templates](https://azure.microsoft.com/en-us/documentation/articles/resource-group-authoring-templates/)
to deploy resources. ARM templates allow you to express the what without having
to express the how.
The Azure builder works under the assumption that it creates everything it
needs to execute a build. When the build has completed, it simply deletes the
resource group to clean up any runtime resources. Resource groups are named
using the form `packer-Resource-Group-<random>`. The value `<random>` is a
random value that is generated at every invocation of packer. The `<random>`
value is re-used as much as possible when naming resources, so users can better
identify and group these transient resources when seen in their subscription.
> The VHD is created on a user specified storage account, not a random one
> created at runtime. When a virtual machine is captured the resulting VHD is
> stored on the same storage account as the source VHD. The VHD created by
> Packer must persist after a build is complete, which is why the storage
> account is set by the user.
The basic steps for a build are:
@ -353,40 +452,54 @@ The basic steps for a build are:
5. Delete the resource group.
6. Delete the temporary VM's OS disk.
The templates used for a build are currently fixed in the code. There is a
template for Linux, Windows, and KeyVault. The templates are themselves
templated with placeholders for names, passwords, SSH keys, certificates, etc.
### What's Randomized?
The Azure builder creates the following random values at runtime.
- Administrator Password: a random 32-character value using the *password
alphabet*.
- Certificate: a 2,048-bit certificate used to secure WinRM communication.
The certificate is valid for 24 hours, starting roughly at invocation
time.
- Certificate Password: a random 32-character value using the *password
alphabet* used to protect the private key of the certificate.
- Compute Name: a random 15-character name prefixed with pkrvm; the name of
the VM.
- Deployment Name: a random 15-character name prefixed with pkfdp; the name
of the deployment.
- KeyVault Name: a random 15-character name prefixed with pkrkv.
- NIC Name: a random 15-character name prefixed with pkrni.
- Public IP Name: a random 15-character name prefixed with pkrip.
- OS Disk Name: a random 15-character name prefixed with pkros.
- Resource Group Name: a random 33-character name prefixed with
packer-Resource-Group-.
- Subnet Name: a random 15-character name prefixed with pkrsn.
- SSH Key Pair: a 2,048-bit asymmetric key pair; can be overridden by the
user.
- Virtual Network Name: a random 15-character name prefixed with pkrvn.
The default alphabet used for random values is
**0123456789bcdfghjklmnpqrstvwxyz**. The alphabet was reduced (no vowels) to
prevent running afoul of Azure decency controls.
The password alphabet used for random values is
**0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ**.
### Windows
The Windows implementation is very similar to the Linux build, with the
exception that it deploys a template to configure KeyVault. Packer communicates
with a Windows VM using the WinRM protocol. Windows VMs on Azure default to
using both password and certificate based authentication for WinRM. The
password is easily set via the VM ARM template, but the certificate requires an
intermediary. The intermediary for Azure is KeyVault. The certificate is
uploaded to a new KeyVault provisioned in the same resource group as the VM.
When the Windows VM is deployed, it links to the certificate in KeyVault, and
Azure will ensure the certificate is injected as part of deployment.
The basic steps for a Windows build are:
@ -398,10 +511,11 @@ The basic steps for a Windows build are:
6. Delete the resource group.
7. Delete the temporary VM's OS disk.
A Windows build requires two templates and two deployments. Unfortunately, the
KeyVault and VM cannot be deployed at the same time, hence the need for two
templates and deployments. The time required to deploy a KeyVault template is
minimal, so overall impact is small.
See the
[examples/azure](https://github.com/hashicorp/packer/tree/master/examples/azure)
folder in the packer project for more examples.
@ -33,30 +33,30 @@ builder.
### Required:
- `api_url` (string) - The CloudStack API endpoint we will connect to. It can
also be specified via environment variable `CLOUDSTACK_API_URL`, if set.
- `api_key` (string) - The API key used to sign all API requests. It can also
be specified via environment variable `CLOUDSTACK_API_KEY`, if set.
- `network` (string) - The name or ID of the network to connect the instance
to.
- `secret_key` (string) - The secret key used to sign all API requests. It
can also be specified via environment variable `CLOUDSTACK_SECRET_KEY`, if
set.
- `service_offering` (string) - The name or ID of the service offering used
for the instance.
- `source_iso` (string) - The name or ID of an ISO that will be mounted
before booting the instance. This option is mutually exclusive with
`source_template`. When using `source_iso`, both `disk_offering` and
`hypervisor` are required.
- `source_template` (string) - The name or ID of the template used as base
template for the instance. This option is mutually exclusive with
`source_iso`.
- `template_os` (string) - The name or ID of the template OS for the new
template that will be created.
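A minimal builder block wiring together the required options above might look
like the following sketch; every value is a placeholder for your own
environment:

``` json
{
  "type": "cloudstack",
  "api_url": "https://cloud.example.com/client/api",
  "api_key": "YOUR_API_KEY",
  "secret_key": "YOUR_SECRET_KEY",
  "network": "management",
  "service_offering": "small",
  "source_template": "ubuntu-16.04",
  "template_os": "Ubuntu 16.04 (64-bit)",
  "ssh_username": "root"
}
```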
@ -71,8 +71,8 @@ builder.
- `cidr_list` (array) - List of CIDRs that will have access to the new
instance. This is needed in order for any provisioners to be able to
connect to the instance. Defaults to `[ "0.0.0.0/0" ]`. Only required when
`use_local_ip_address` is `false`.
- `create_security_group` (boolean) - If `true` a temporary security group
will be created which allows traffic towards the instance from the
@ -83,22 +83,23 @@ builder.
instance. This option is only available (and also required) when using
`source_iso`.
- `disk_size` (number) - The size (in GB) of the root disk of the new
instance. This option is only available when using `source_template`.
- `expunge` (boolean) - Set to `true` to expunge the instance when it is
destroyed. Defaults to `false`.
- `http_directory` (string) - Path to a directory to serve using an HTTP
server. The files in this directory will be available over HTTP and can be
requested from the virtual machine. This is useful for hosting kickstart
files and so on. By default this is "", which means no HTTP server will be
started. The address and port of the HTTP server will be available as
variables in `user_data`. This is covered in more detail below.
- `http_get_only` (boolean) - Some cloud providers only allow HTTP GET calls
to their CloudStack API. If using such a provider, you need to set this to
`true` in order for the provider to only make GET calls and no POST calls.
- `http_port_min` and `http_port_max` (number) - These are the minimum and
maximum port to use for the HTTP server started to serve the
@ -117,10 +118,11 @@ builder.
- `instance_name` (string) - The name of the instance. Defaults to
"packer-UUID" where UUID is dynamically generated.
- `prevent_firewall_changes` (boolean) - Set to `true` to prevent network
ACLs or firewall rules creation. Defaults to `false`.
- `project` (string) - The name or ID of the project to deploy the instance
to.
- `public_ip_address` (string) - The public IP address or its ID used for
connecting any provisioners to. If not provided, a temporary public IP
@ -130,18 +132,19 @@ builder.
forwarding rule. Set this attribute if you do not want to use a random
public port.
- `security_groups` (array of strings) - A list of security group IDs or
names to associate the instance with.
- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
authenticate connections to the source instance. No temporary keypair will
be created, and the values of `ssh_password` and `ssh_private_key_file`
will be ignored. To use this option with a key pair already configured in
the source image, leave the `keypair` blank. To associate an existing key
pair with the source instance, set the `keypair` field to the name of the
key pair.
- `ssl_no_verify` (boolean) - Set to `true` to skip SSL verification.
Defaults to `false`.
- `template_display_text` (string) - The display text of the new template.
Defaults to the `template_name`.
@ -152,29 +155,30 @@ builder.
- `template_name` (string) - The name of the new template. Defaults to
"packer-{{timestamp}}" where timestamp will be the current time.
- `template_public` (boolean) - Set to `true` to indicate that the template
is available for all accounts. Defaults to `false`.
- `template_password_enabled` (boolean) - Set to `true` to indicate the
template should be password enabled. Defaults to `false`.
- `template_requires_hvm` (boolean) - Set to `true` to indicate the template
requires hardware-assisted virtualization. Defaults to `false`.
- `template_scalable` (boolean) - Set to `true` to indicate that the template
contains tools to support dynamic scaling of VM cpu/memory. Defaults to
`false`.
- `temporary_keypair_name` (string) - The name of the temporary SSH key pair
to generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where `<UUID>` is a 36 character unique identifier.
- `user_data` (string) - User data to launch with the instance. This is a
[template engine](/docs/templates/engine.html); see *User Data* below for
more details.
- `user_data_file` (string) - Path to a file that will be used for the user
data when launching the instance. This file will be parsed as a [template
engine](/docs/templates/engine.html); see *User Data* below for more
details.
- `use_local_ip_address` (boolean) - Set to `true` to indicate that the
@ -184,7 +188,7 @@ builder.
The available variables are:
- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will be
blank.
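As a sketch, these variables are typically interpolated into `user_data` to
point the instance at a file served from `http_directory`; the `ks.cfg` file
name here is hypothetical:

``` json
{
  "http_directory": "http",
  "user_data": "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg"
}
```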
@ -12,5 +12,6 @@ sidebar_current: 'docs-builders-custom'
Packer is extensible, allowing you to write new builders without having to
modify the core source code of Packer itself. Documentation for creating new
builders is covered in the [custom
builders](/docs/extending/custom-builders.html) page of the Packer plugin
section.
@ -1,10 +1,10 @@
---
description: |
The digitalocean Packer builder is able to create new images for use with
DigitalOcean. The builder takes a source image, runs any provisioning necessary
on the image after launching it, then snapshots it into a reusable image. This
reusable image can then be used as the foundation of new servers that are
launched within DigitalOcean.
layout: docs
page_title: 'DigitalOcean - Builders'
sidebar_current: 'docs-builders-digitalocean'
@ -17,8 +17,8 @@ Type: `digitalocean`
The `digitalocean` Packer builder is able to create new images for use with
[DigitalOcean](https://www.digitalocean.com). The builder takes a source image,
runs any provisioning necessary on the image after launching it, then snapshots
it into a reusable image. This reusable image can then be used as the
foundation of new servers that are launched within DigitalOcean.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
@ -36,19 +36,19 @@ builder.
### Required:
- `api_token` (string) - The client TOKEN to use to access your account. It
can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`, if
set.
- `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it. See
<https://developers.digitalocean.com/documentation/v2/#list-all-images> for
details on how to get a list of the accepted image names/slugs.
- `region` (string) - The name (or slug) of the region to launch the droplet
in. Consequently, this is the region where the snapshot will be available.
See
<https://developers.digitalocean.com/documentation/v2/#list-all-regions>
for the accepted region names/slugs.
- `size` (string) - The name (or slug) of the droplet size to use. See
<https://developers.digitalocean.com/documentation/v2/#list-all-sizes> for
@ -66,18 +66,18 @@ builder.
- `private_networking` (boolean) - Set to `true` to enable private networking
for the droplet being created. This defaults to `false`, or not enabled.
- `monitoring` (boolean) - Set to `true` to enable monitoring for the droplet
being created. This defaults to `false`, or not enabled.
- `ipv6` (boolean) - Set to `true` to enable ipv6 for the droplet being
created. This defaults to `false`, or not enabled.
- `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. Defaults to "packer-{{timestamp}}" (see
[configuration templates](/docs/templates/engine.html) for more info).
- `snapshot_regions` (array of strings) - The regions of the resulting
snapshot that will appear in your account.
- `state_timeout` (string) - The time to wait, as a duration string, for a
droplet to enter a desired state (such as "active") before timing out. The
@ -1,8 +1,8 @@
---
description: |
The docker Packer builder builds Docker images using Docker. The builder starts
a Docker container, runs provisioners within this container, then exports the
container for reuse or commits the image.
layout: docs
page_title: 'Docker - Builders'
sidebar_current: 'docs-builders-docker'
@ -19,21 +19,20 @@ container, then exports the container for reuse or commits the image.
Packer builds Docker containers *without* the use of
[Dockerfiles](https://docs.docker.com/engine/reference/builder/). By not using
`Dockerfiles`, Packer is able to provision containers with portable scripts or
configuration management systems that are not tied to Docker in any way. It also
has a simple mental model: you provision containers much the same way you
configuration management systems that are not tied to Docker in any way. It
also has a simple mental model: you provision containers much the same way you
provision a normal virtualized or dedicated server. For more information, read
the section on [Dockerfiles](#dockerfiles).
The Docker builder must run on a machine that has Docker Engine installed.
Therefore the builder only works on machines that support Docker and _does not
support running on a Docker remote host_. You can learn about what
[platforms Docker supports and how to install onto them](https://docs.docker.com/engine/installation/)
in the Docker documentation.
Therefore the builder only works on machines that support Docker and *does not
support running on a Docker remote host*. You can learn about what [platforms
Docker supports and how to install onto
them](https://docs.docker.com/engine/installation/) in the Docker
documentation.
Please note: Packer does not yet have support for Windows containers.
## Basic Example: Export
Below is a fully functioning example. It doesn't do anything useful, since no
@ -49,9 +48,9 @@ provisioners are defined, but it will effectively repackage an image.
## Basic Example: Commit
Below is another example, the same as above but instead of exporting the running
container, this one commits the container to an image. The image can then be
more easily tagged, pushed, etc.
Below is another example, the same as above but instead of exporting the
running container, this one commits the container to an image. The image can
then be more easily tagged, pushed, etc.
``` json
{
@ -102,7 +101,8 @@ Allowed metadata fields that can be changed are:
- EX: `"ENTRYPOINT /var/www/start.sh"`
- ENV
- String, note there is no equal sign:
- EX: `"ENV HOSTNAME www.example.com"` not `"ENV HOSTNAME=www.example.com"`
- EX: `"ENV HOSTNAME www.example.com"` not
`"ENV HOSTNAME=www.example.com"`
- EXPOSE
- String, space separated ports
- EX: `"EXPOSE 80 443"`
@ -131,7 +131,7 @@ Configuration options are organized below into two categories: required and
optional. Within each category, the available options are alphabetized and
described.
The Docker builder uses a special Docker communicator _and will not use_ the
The Docker builder uses a special Docker communicator *and will not use* the
standard [communicators](/docs/templates/communicator.html).
### Required:
@ -145,50 +145,53 @@ You must specify (only) one of `commit`, `discard`, or `export_path`.
This is useful for the [artifice
post-processor](https://www.packer.io/docs/post-processors/artifice.html).
- `export_path` (string) - The path where the final container will be exported
as a tar file.
- `export_path` (string) - The path where the final container will be
exported as a tar file.
- `image` (string) - The base image for the Docker container that will
be started. This image will be pulled from the Docker registry if it doesn't
- `image` (string) - The base image for the Docker container that will be
started. This image will be pulled from the Docker registry if it doesn't
already exist.
### Optional:
- `author` (string) - Set the author (e-mail) of a commit.
- `aws_access_key` (string) - The AWS access key used to communicate with AWS.
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_access_key` (string) - The AWS access key used to communicate with
AWS. [Learn how to set
this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_secret_key` (string) - The AWS secret key used to communicate with AWS.
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_secret_key` (string) - The AWS secret key used to communicate with
AWS. [Learn how to set
this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_token` (string) - The AWS access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
- `aws_token` (string) - The AWS access token to use. This is different from
the access key and secret key. If you're not sure what this is, then you
probably don't need it. This will also be read from the `AWS_SESSION_TOKEN`
environmental variable.
- `aws_profile` (string) - The AWS shared credentials profile used to communicate with AWS.
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_profile` (string) - The AWS shared credentials profile used to
communicate with AWS. [Learn how to set
this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `changes` (array of strings) - Dockerfile instructions to add to the commit.
Example of instructions are `CMD`, `ENTRYPOINT`, `ENV`, and `EXPOSE`. Example:
`[ "USER ubuntu", "WORKDIR /app", "EXPOSE 8080" ]`
- `changes` (array of strings) - Dockerfile instructions to add to the
commit. Example of instructions are `CMD`, `ENTRYPOINT`, `ENV`, and
`EXPOSE`. Example: `[ "USER ubuntu", "WORKDIR /app", "EXPOSE 8080" ]`
- `ecr_login` (boolean) - Defaults to false. If true, the builder will login in
order to pull the image from
[Amazon EC2 Container Registry (ECR)](https://aws.amazon.com/ecr/).
The builder only logs in for the duration of the pull. If true
`login_server` is required and `login`, `login_username`, and
`login_password` will be ignored. For more information see the
[section on ECR](#amazon-ec2-container-registry).
- `ecr_login` (boolean) - Defaults to false. If true, the builder will login
in order to pull the image from [Amazon EC2 Container Registry
(ECR)](https://aws.amazon.com/ecr/). The builder only logs in for the
duration of the pull. If true `login_server` is required and `login`,
`login_username`, and `login_password` will be ignored. For more
information see the [section on ECR](#amazon-ec2-container-registry).
* `exec_user` (string) - Username or UID (format: <name|uid>[:<group|gid>])
to run remote commands with. You may need this if you get permission errors
trying to run the `shell` or other provisioners.
- `exec_user` (string) - Username or UID (format:
&lt;name\|uid&gt;\[:&lt;group\|gid&gt;\]) to run remote commands with. You
may need this if you get permission errors trying to run the `shell` or
other provisioners.
- `login` (boolean) - Defaults to false. If true, the builder will login in
order to pull the image. The builder only logs in for the duration of
the pull. It always logs out afterwards. For log into ECR see `ecr_login`.
  order to pull the image. The builder only logs in for the duration of the
  pull. It always logs out afterwards. To log into ECR, see `ecr_login`.
- `login_username` (string) - The username to use to authenticate to login.
@ -211,16 +214,18 @@ You must specify (only) one of `commit`, `discard`, or `export_path`.
couple template variables to customize, as well.
- `volumes` (map of strings to strings) - A mapping of additional volumes to
mount into this container. The key of the object is the host path, the value
is the container path.
mount into this container. The key of the object is the host path, the
value is the container path.
- `container_dir` (string) - The directory inside container to mount
temp directory from host server for work [file provisioner](/docs/provisioners/file.html).
By default this is set to `/packer-files`.
- `container_dir` (string) - The directory inside the container in which to
  mount the temporary directory from the host, used by the [file
  provisioner](/docs/provisioners/file.html). By default this is set to
  `/packer-files`.
- `fix_upload_owner` (boolean) - If true, files uploaded to the container will
be owned by the user the container is running as. If false, the owner will depend
on the version of docker installed in the system. Defaults to true.
- `fix_upload_owner` (boolean) - If true, files uploaded to the container
will be owned by the user the container is running as. If false, the owner
will depend on the version of docker installed in the system. Defaults to
true.
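Several of the optional settings above can be combined in one builder stanza.
The following is a sketch only; the image name and host path are illustrative,
not taken from this page:

``` json
{
  "type": "docker",
  "image": "ubuntu:16.04",
  "commit": true,
  "volumes": {
    "/var/cache/apt": "/var/cache/apt"
  },
  "container_dir": "/packer-files",
  "fix_upload_owner": true
}
```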
## Using the Artifact: Export
@ -234,8 +239,8 @@ with the [docker-import](/docs/post-processors/docker-import.html) and
If you set `commit`, see the next section.
The example below shows a full configuration that would import and push the
created image. This is accomplished using a sequence definition (a collection of
post-processors that are treated as as single pipeline, see
created image. This is accomplished using a sequence definition (a collection
of post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
``` json
@ -256,8 +261,8 @@ post-processors that are treated as as single pipeline, see
In the above example, the result of each builder is passed through the defined
sequence of post-processors starting first with the `docker-import`
post-processor which will import the artifact as a docker image. The resulting
docker image is then passed on to the `docker-push` post-processor which handles
pushing the image to a container repository.
docker image is then passed on to the `docker-push` post-processor which
handles pushing the image to a container repository.
If you want to do this manually, however, perhaps from a script, you can import
the image using the process below:
@ -273,9 +278,10 @@ and `docker push`, respectively.
If you committed your container to an image, you probably want to tag, save,
push, etc. Packer can do this automatically for you. An example is shown below
which tags and pushes an image. This is accomplished using a sequence definition
(a collection of post-processors that are treated as as single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
which tags and pushes an image. This is accomplished using a sequence
definition (a collection of post-processors that are treated as a single
pipeline, see [Post-Processors](/docs/templates/post-processors.html) for more
information):
``` json
{
@ -294,9 +300,10 @@ which tags and pushes an image. This is accomplished using a sequence definition
In the above example, the result of each builder is passed through the defined
sequence of post-processors starting first with the `docker-tag` post-processor
which tags the committed image with the supplied repository and tag information.
Once tagged, the resulting artifact is then passed on to the `docker-push`
post-processor which handles pushing the image to a container repository.
which tags the committed image with the supplied repository and tag
information. Once tagged, the resulting artifact is then passed on to the
`docker-push` post-processor which handles pushing the image to a container
repository.
Going a step further, if you wanted to tag and push an image to multiple
container repositories, this could be accomplished by defining two,
@ -329,10 +336,9 @@ nearly-identical sequence definitions, as demonstrated by the example below:
## Amazon EC2 Container Registry
Packer can tag and push images for use in
[Amazon EC2 Container Registry](https://aws.amazon.com/ecr/). The post
processors work as described above and example configuration properties are
shown below:
Packer can tag and push images for use in [Amazon EC2 Container
Registry](https://aws.amazon.com/ecr/). The post processors work as described
above and example configuration properties are shown below:
``` json
{
@ -355,7 +361,8 @@ shown below:
}
```
[Learn how to set Amazon AWS credentials.](/docs/builders/amazon.html#specifying-amazon-credentials)
[Learn how to set Amazon AWS
credentials.](/docs/builders/amazon.html#specifying-amazon-credentials)
## Dockerfiles
@ -368,8 +375,8 @@ etc. to provision your Docker container just like you would a regular
virtualized or dedicated machine.
While Docker has many features, Packer views Docker simply as a container
runner. To that end, Packer is able to repeatedly build these containers
using portable provisioning scripts.
runner. To that end, Packer is able to repeatedly build these containers using
portable provisioning scripts.
## Overriding the host directory

View File

@ -44,8 +44,8 @@ Any [communicator](/docs/templates/communicator.html) defined is ignored.
### Optional:
You can only define one of `source` or `content`. If none of them is
defined the artifact will be empty.
You can only define one of `source` or `content`. If none of them is defined
the artifact will be empty.
- `source` (string) - The path for a file which will be copied as the
artifact.
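For instance, a minimal `file` builder template using `content` might look
like the following sketch (the target path and content are illustrative):

``` json
{
  "type": "file",
  "content": "Hello, world!",
  "target": "/tmp/artifact.txt"
}
```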

View File

@ -1,7 +1,7 @@
---
description: |
The googlecompute Packer builder is able to create images for use with
Google Cloud Compute Engine (GCE) based on existing images.
The googlecompute Packer builder is able to create images for use with Google
Cloud Compute Engine (GCE) based on existing images.
layout: docs
page_title: 'Google Compute - Builders'
sidebar_current: 'docs-builders-googlecompute'
@ -12,21 +12,23 @@ sidebar_current: 'docs-builders-googlecompute'
Type: `googlecompute`
The `googlecompute` Packer builder is able to create
[images](https://developers.google.com/compute/docs/images) for use with [Google
Compute Engine](https://cloud.google.com/products/compute-engine) (GCE) based on
existing images.
[images](https://developers.google.com/compute/docs/images) for use with
[Google Compute Engine](https://cloud.google.com/products/compute-engine) (GCE)
based on existing images.
It is possible to build images from scratch, but not with the `googlecompute` Packer builder.
The process is recommended only for advanced users, please see [Building GCE Images from Scratch]
(https://cloud.google.com/compute/docs/tutorials/building-images)
and the [Google Compute Import Post-Processor](/docs/post-processors/googlecompute-import.html)
for more information.
It is possible to build images from scratch, but not with the `googlecompute`
Packer builder. The process is recommended only for advanced users; please see
[Building GCE Images from
Scratch](https://cloud.google.com/compute/docs/tutorials/building-images) and
the [Google Compute Import
Post-Processor](/docs/post-processors/googlecompute-import.html) for more
information.
## Authentication
Authenticating with Google Cloud services requires at most one JSON file, called
the *account file*. The *account file* is **not** required if you are running
the `googlecompute` Packer builder from a GCE instance with a
Authenticating with Google Cloud services requires at most one JSON file,
called the *account file*. The *account file* is **not** required if you are
running the `googlecompute` Packer builder from a GCE instance with a
properly-configured [Compute Engine Service
Account](https://cloud.google.com/compute/docs/authentication).
@ -72,10 +74,11 @@ straightforward, it is documented here.
3. Click the "Create credentials" button, select "Service account key"
4. Create a new service account that at least has `Compute Engine Instance Admin (v1)` and `Service Account User` roles.
4. Create a new service account that at least has
`Compute Engine Instance Admin (v1)` and `Service Account User` roles.
5. Choose `JSON` as the Key type and click "Create".
A JSON file will be downloaded automatically. This is your *account file*.
5. Choose `JSON` as the Key type and click "Create". A JSON file will be
downloaded automatically. This is your *account file*.
### Precedence of Authentication Methods
@ -85,10 +88,10 @@ location found:
1. An `account_file` option in your packer file.
2. A JSON file (Service Account) whose path is specified by the
`GOOGLE_APPLICATION_CREDENTIALS` environment variable.
`GOOGLE_APPLICATION_CREDENTIALS` environment variable.
3. A JSON file in a location known to the `gcloud` command-line tool.
(`gcloud` creates it when it's configured)
(`gcloud` creates it when it's configured)
On Windows, this is:
@ -99,8 +102,8 @@ location found:
$HOME/.config/gcloud/application_default_credentials.json
4. On Google Compute Engine and Google App Engine Managed VMs, it fetches
credentials from the metadata server. (Needs a correct VM authentication scope
configuration, see above.)
credentials from the metadata server. (Needs a correct VM authentication
scope configuration, see above.)
## Examples
@ -109,8 +112,8 @@ configuration, see above.)
Below is a fully functioning example. It doesn't do anything useful since no
provisioners or startup-script metadata are defined, but it will effectively
repackage an existing GCE image. The account\_file is obtained in the previous
section. If it parses as JSON it is assumed to be the file itself, otherwise, it
is assumed to be the path to the file containing the JSON.
section. If it parses as JSON it is assumed to be the file itself, otherwise,
it is assumed to be the path to the file containing the JSON.
``` json
{
@ -129,13 +132,14 @@ is assumed to be the path to the file containing the JSON.
### Windows Example
Before you can provision using the winrm communicator, you need to allow traffic
through google's firewall on the winrm port (tcp:5986).
You can do so using the gcloud command.
```
gcloud compute firewall-rules create allow-winrm --allow tcp:5986
```
Or alternatively by navigating to https://console.cloud.google.com/networking/firewalls/list.
Before you can provision using the winrm communicator, you need to allow
traffic through Google's firewall on the winrm port (tcp:5986). You can do so
using the gcloud command.
gcloud compute firewall-rules create allow-winrm --allow tcp:5986
Or alternatively by navigating to
<https://console.cloud.google.com/networking/firewalls/list>.
Once this is set up, the following is a complete working packer config after
setting a valid `account_file` and `project_id`:
@ -162,12 +166,15 @@ setting a valid `account_file` and `project_id`:
]
}
```
This build can take up to 15 minutes.
### Nested Hypervisor Example
This is an example of using the `image_licenses` configuration option to create a GCE image that has nested virtualization enabled. See
[Enabling Nested Virtualization for VM Instances](https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances)
This is an example of using the `image_licenses` configuration option to create
a GCE image that has nested virtualization enabled. See [Enabling Nested
Virtualization for VM
Instances](https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances)
for details.
``` json
@ -198,8 +205,8 @@ builder.
### Required:
- `project_id` (string) - The project ID that will be used to launch instances
and store images.
- `project_id` (string) - The project ID that will be used to launch
instances and store images.
- `source_image` (string) - The source image to use to create the new image
from. You can also specify `source_image_family` instead. If both
@ -215,29 +222,35 @@ builder.
### Optional:
- `account_file` (string) - The JSON file containing your account credentials.
Not required if you run Packer on a GCE instance with a service account.
Instructions for creating the file or using service accounts are above.
- `account_file` (string) - The JSON file containing your account
credentials. Not required if you run Packer on a GCE instance with a
service account. Instructions for creating the file or using service
accounts are above.
- `accelerator_count` (number) - Number of guest accelerator cards to add to the launched instance.
- `accelerator_count` (number) - Number of guest accelerator cards to add to
the launched instance.
- `accelerator_type` (string) - Full or partial URL of the guest accelerator type. GPU accelerators can only be used with
`"on_host_maintenance": "TERMINATE"` option set.
Example: `"projects/project_id/zones/europe-west1-b/acceleratorTypes/nvidia-tesla-k80"`
- `accelerator_type` (string) - Full or partial URL of the guest accelerator
type. GPU accelerators can only be used with
`"on_host_maintenance": "TERMINATE"` option set. Example:
`"projects/project_id/zones/europe-west1-b/acceleratorTypes/nvidia-tesla-k80"`
- `address` (string) - The name of a pre-allocated static external IP address.
Note, must be the name and not the actual IP address.
- `address` (string) - The name of a pre-allocated static external IP
address. Note, must be the name and not the actual IP address.
- `disable_default_service_account` (bool) - If true, the default service account will not be used if `service_account_email`
is not specified. Set this value to true and omit `service_account_email` to provision a VM with no service account.
- `disable_default_service_account` (bool) - If true, the default service
account will not be used if `service_account_email` is not specified. Set
this value to true and omit `service_account_email` to provision a VM with
no service account.
- `disk_name` (string) - The name of the disk, if unset the instance name will be
used.
- `disk_name` (string) - The name of the disk, if unset the instance name
will be used.
- `disk_size` (number) - The size of the disk in GB. This defaults to `10`,
which is 10GB.
- `disk_type` (string) - Type of disk used to back your instance, like `pd-ssd` or `pd-standard`. Defaults to `pd-standard`.
- `disk_type` (string) - Type of disk used to back your instance, like
`pd-ssd` or `pd-standard`. Defaults to `pd-standard`.
- `image_description` (string) - The description of the resulting image.
@ -249,13 +262,14 @@ builder.
- `image_labels` (object of key/value strings) - Key/value pair labels to
apply to the created image.
- `image_licenses` (array of strings) - Licenses to apply to the created image.
- `image_licenses` (array of strings) - Licenses to apply to the created
image.
- `image_name` (string) - The unique name of the resulting image. Defaults to
`"packer-{{timestamp}}"`.
- `instance_name` (string) - A name to give the launched instance. Beware that
this must be unique. Defaults to `"packer-{{uuid}}"`.
- `instance_name` (string) - A name to give the launched instance. Beware
that this must be unique. Defaults to `"packer-{{uuid}}"`.
- `labels` (object of key/value strings) - Key/value pair labels to apply to
the launched instance.
@ -266,38 +280,40 @@ builder.
instance.
- `min_cpu_platform` (string) - A Minimum CPU Platform for VM Instance.
Availability and default CPU platforms vary across zones, based on
the hardware available in each GCP zone. [Details](https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform)
Availability and default CPU platforms vary across zones, based on the
hardware available in each GCP zone.
[Details](https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform)
- `network` (string) - The Google Compute network id or URL to use for the
launched instance. Defaults to `"default"`. If the value is not a URL, it
will be interpolated to `projects/((network_project_id))/global/networks/((network))`.
This value is not required if a `subnet` is specified.
will be interpolated to
`projects/((network_project_id))/global/networks/((network))`. This value
is not required if a `subnet` is specified.
- `network_project_id` (string) - The project ID for the network and
subnetwork to use for launched instance. Defaults to `project_id`.
- `network_project_id` (string) - The project ID for the network and subnetwork
to use for launched instance. Defaults to `project_id`.
- `omit_external_ip` (boolean) - If true, the instance will not have an external IP.
`use_internal_ip` must be true if this property is true.
- `omit_external_ip` (boolean) - If true, the instance will not have an
external IP. `use_internal_ip` must be true if this property is true.
- `on_host_maintenance` (string) - Sets Host Maintenance Option. Valid
choices are `MIGRATE` and `TERMINATE`. Please see [GCE Instance Scheduling
Options](https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options),
as not all machine\_types support `MIGRATE` (i.e. machines with GPUs).
If preemptible is true this can only be `TERMINATE`. If preemptible
is false, it defaults to `MIGRATE`
as not all machine\_types support `MIGRATE` (i.e. machines with GPUs). If
preemptible is true this can only be `TERMINATE`. If preemptible is false,
it defaults to `MIGRATE`
- `preemptible` (boolean) - If true, launch a preemptible instance.
- `region` (string) - The region in which to launch the instance. Defaults to
the region hosting the specified `zone`.
- `service_account_email` (string) - The service account to be used for launched instance. Defaults to
the project's default service account unless `disable_default_service_account` is true.
- `service_account_email` (string) - The service account to be used for the
  launched instance. Defaults to the project's default service account unless
  `disable_default_service_account` is true.
- `scopes` (array of strings) - The service account scopes for launched instance.
Defaults to:
- `scopes` (array of strings) - The service account scopes for the launched
  instance. Defaults to:
``` json
[
@ -307,21 +323,21 @@ builder.
]
```
- `source_image_project_id` (string) - The project ID of the
project containing the source image.
- `source_image_project_id` (string) - The project ID of the project
containing the source image.
- `startup_script_file` (string) - The path to a startup script to run on
the VM from which the image will be made.
- `startup_script_file` (string) - The path to a startup script to run on the
VM from which the image will be made.
- `state_timeout` (string) - The time to wait for instance state changes.
Defaults to `"5m"`.
- `subnetwork` (string) - The Google Compute subnetwork id or URL to use for
the launched instance. Only required if the `network` has been created with
custom subnetting. Note, the region of the subnetwork must match the `region`
or `zone` in which the VM is launched. If the value is not a URL, it
will be interpolated to `projects/((network_project_id))/regions/((region))/subnetworks/((subnetwork))`
custom subnetting. Note, the region of the subnetwork must match the
`region` or `zone` in which the VM is launched. If the value is not a URL,
it will be interpolated to
`projects/((network_project_id))/regions/((region))/subnetworks/((subnetwork))`
- `tags` (array of strings) - Assign network tags to apply firewall rules to
VM instance.
@ -331,33 +347,36 @@ builder.
## Startup Scripts
Startup scripts can be a powerful tool for configuring the instance from which the image is made.
The builder will wait for a startup script to terminate. A startup script can be provided via the
`startup_script_file` or `startup-script` instance creation `metadata` field. Therefore, the build
time will vary depending on the duration of the startup script. If `startup_script_file` is set,
the `startup-script` `metadata` field will be overwritten. In other words, `startup_script_file`
takes precedence.
Startup scripts can be a powerful tool for configuring the instance from which
the image is made. The builder will wait for a startup script to terminate. A
startup script can be provided via the `startup_script_file` or
`startup-script` instance creation `metadata` field. Therefore, the build time
will vary depending on the duration of the startup script. If
`startup_script_file` is set, the `startup-script` `metadata` field will be
overwritten. In other words, `startup_script_file` takes precedence.
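As a sketch of the precedence described above, a builder that sets both
`startup_script_file` and a `startup-script` metadata entry will run only the
file's contents. All values below are illustrative placeholders:

``` json
{
  "type": "googlecompute",
  "project_id": "my-project",
  "source_image": "debian-9-stretch-v20181011",
  "zone": "us-central1-a",
  "ssh_username": "packer",
  "startup_script_file": "scripts/setup.sh",
  "metadata": {
    "startup-script": "#!/bin/sh\necho 'overridden by startup_script_file'"
  }
}
```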
The builder does not check for a pass/fail/error signal from the startup script, at this time. Until
such support is implemented, startup scripts should be robust, as an image will still be built even
when a startup script fails.
The builder does not check for a pass/fail/error signal from the startup
script at this time. Until such support is implemented, startup scripts should
be robust, as an image will still be built even when a startup script fails.
### Windows
A Windows startup script can only be provided via the `windows-startup-script-cmd` instance
creation `metadata` field. The builder will *not* wait for a Windows startup script to
terminate. You have to ensure that it finishes before the instance shuts down.
A Windows startup script can only be provided via the
`windows-startup-script-cmd` instance creation `metadata` field. The builder
will *not* wait for a Windows startup script to terminate. You have to ensure
that it finishes before the instance shuts down.
### Logging
Startup script logs can be copied to a Google Cloud Storage (GCS) location specified via the
`startup-script-log-dest` instance creation `metadata` field. The GCS location must be writeable by
the credentials provided in the builder config's `account_file`.
Startup script logs can be copied to a Google Cloud Storage (GCS) location
specified via the `startup-script-log-dest` instance creation `metadata` field.
The GCS location must be writeable by the credentials provided in the builder
config's `account_file`.
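Assuming a GCS bucket that the builder's credentials can write to, the log
destination is set through the instance `metadata` field, e.g. (the bucket
name here is hypothetical):

``` json
{
  "metadata": {
    "startup-script-log-dest": "gs://my-packer-logs/startup.log"
  }
}
```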
## Gotchas
CentOS and recent Debian images have root ssh access disabled by default. Set `ssh_username` to
any user, which will be created by packer with sudo access.
CentOS and recent Debian images have root ssh access disabled by default. Set
`ssh_username` to any user; that user will be created by Packer with sudo
access.
The machine type must have a scratch disk, which means you can't use an
`f1-micro` or `g1-small` to build images.

View File

@ -14,11 +14,11 @@ sidebar_current: 'docs-builders-hetzner-cloud'
Type: `hcloud`
The `hcloud` Packer builder is able to create new images for use with
[Hetzner Cloud](https://www.hetzner.cloud). The builder takes a source image,
runs any provisioning necessary on the image after launching it, then snapshots
it into a reusable image. This reusable image can then be used as the foundation
of new servers that are launched within the Hetzner Cloud.
The `hcloud` Packer builder is able to create new images for use with [Hetzner
Cloud](https://www.hetzner.cloud). The builder takes a source image, runs any
provisioning necessary on the image after launching it, then snapshots it into
a reusable image. This reusable image can then be used as the foundation of new
servers that are launched within the Hetzner Cloud.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
@ -35,15 +35,15 @@ builder.
### Required:
- `token` (string) - The client TOKEN to use to access your account. It
can also be specified via environment variable `HCLOUD_TOKEN`,
if set.
- `token` (string) - The client TOKEN to use to access your account. It can
also be specified via environment variable `HCLOUD_TOKEN`, if set.
- `image` (string) - ID or name of image to launch server from.
- `location` (string) - The name of the location to launch the server in.
- `server_type` (string) - ID or name of the server type this server should be created with.
- `server_type` (string) - ID or name of the server type this server should
be created with.
### Optional:
@ -58,7 +58,9 @@ builder.
appear in your account. Defaults to "packer-{{timestamp}}" (see
[configuration templates](/docs/templates/engine.html) for more info).
- `poll_interval` (string) - Configures the interval in which actions are polled by the client. Default `500ms`. Increase this interval if you run into rate limiting errors.
- `poll_interval` (string) - Configures the interval at which actions are
  polled by the client. Default `500ms`. Increase this interval if you run
  into rate limiting errors.
- `user_data` (string) - User data to launch with the server.
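Putting the required options together, a minimal `hcloud` template might look
like this sketch (the token is read from `HCLOUD_TOKEN`; the image, location,
and server type values are illustrative):

``` json
{
  "type": "hcloud",
  "image": "ubuntu-18.04",
  "location": "nbg1",
  "server_type": "cx11",
  "ssh_username": "root",
  "snapshot_name": "packer-{{timestamp}}"
}
```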

View File

@ -9,15 +9,16 @@ sidebar_current: 'docs-builders-hyperv'
# HyperV Builder
The HyperV Packer builder is able to create [Hyper-V](https://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx)
The HyperV Packer builder is able to create
[Hyper-V](https://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx)
virtual machines and export them.
- [hyperv-iso](/docs/builders/hyperv-iso.html) - Starts from
an ISO file, creates a brand new Hyper-V VM, installs an OS,
provisions software within the OS, then exports that machine to create
an image. This is best for people who want to start from scratch.
- [hyperv-iso](/docs/builders/hyperv-iso.html) - Starts from an ISO file,
creates a brand new Hyper-V VM, installs an OS, provisions software within
the OS, then exports that machine to create an image. This is best for
people who want to start from scratch.
- [hyperv-vmcx](/docs/builders/hyperv-vmcx.html) - Clones an
existing virtual machine, provisions software within the OS,
then exports that machine to create an image. This is best for
people who have existing base images and want to customize them.
- [hyperv-vmcx](/docs/builders/hyperv-vmcx.html) - Clones an existing
virtual machine, provisions software within the OS, then exports that
machine to create an image. This is best for people who have existing base
images and want to customize them.

View File

@ -10,9 +10,9 @@ sidebar_current: 'docs-builders'
# Builders
Builders are responsible for creating machines and generating images from them
for various platforms. For example, there are separate builders for EC2, VMware,
VirtualBox, etc. Packer comes with many builders by default, and can also be
extended to add new builders.
for various platforms. For example, there are separate builders for EC2,
VMware, VirtualBox, etc. Packer comes with many builders by default, and can
also be extended to add new builders.
To learn more about an individual builder, choose it from the sidebar. Each
builder has its own configuration options and parameters.

View File

@ -5,8 +5,8 @@ description: |
as a tar.gz of the root file system.
layout: docs
page_title: 'LXC - Builders'
sidebar_current: 'docs-builders-lxc`'
...
sidebar_current: 'docs-builders-lxc'
---
# LXC Builder
@ -19,12 +19,11 @@ as a tar.gz of the root file system.
The LXC builder requires a modern linux kernel and the `lxc` or `lxc1` package.
This builder does not work with LXD.
~> Note: to build CentOS images on a Debian family host, you will need the `yum`
package installed.
<br>Some provisioners such as `ansible-local` get confused when running in
a container of a different family. E.g. they will attempt to use `apt-get` to
install packages when running in a CentOS container if the parent OS is Debian
based.
~> Note: to build CentOS images on a Debian family host, you will need the
`yum` package installed. <br>Some provisioners such as `ansible-local` get
confused when running in a container of a different family. E.g. they will
attempt to use `apt-get` to install packages when running in a CentOS
container if the parent OS is Debian based.
## Basic Example
@ -78,50 +77,50 @@ Below is a fully functioning example.
### Required:
- `config_file` (string) - The path to the lxc configuration file.
- `config_file` (string) - The path to the lxc configuration file.
- `template_name` (string) - The LXC template name to use.
- `template_name` (string) - The LXC template name to use.
- `template_environment_vars` (array of strings) - Environmental variables to
use to build the template with.
- `template_environment_vars` (array of strings) - Environmental variables to
use to build the template with.
### Optional:
- `target_runlevel` (number) - The minimum run level to wait for the container to
reach. Note some distributions (Ubuntu) simulate run levels and may report
5 rather than 3.
- `target_runlevel` (number) - The minimum run level to wait for the
container to reach. Note some distributions (Ubuntu) simulate run levels
and may report 5 rather than 3.
- `output_directory` (string) - The directory in which to save the exported
tar.gz. Defaults to `output-<BuildName>` in the current directory.
- `output_directory` (string) - The directory in which to save the exported
tar.gz. Defaults to `output-<BuildName>` in the current directory.
- `container_name` (string) - The name of the LXC container. Usually stored in
`/var/lib/lxc/containers/<container_name>`. Defaults to
`packer-<BuildName>`.
- `container_name` (string) - The name of the LXC container. Usually stored
in `/var/lib/lxc/containers/<container_name>`. Defaults to
`packer-<BuildName>`.
- `command_wrapper` (string) - Allows you to specify a wrapper command, such
as `ssh` so you can execute packer builds on a remote host. Defaults to
Empty.
- `command_wrapper` (string) - Allows you to specify a wrapper command, such
as `ssh` so you can execute packer builds on a remote host. Defaults to
Empty.
- `init_timeout` (string) - The timeout in seconds to wait for the
container to start. Defaults to 20 seconds.
- `init_timeout` (string) - The timeout in seconds to wait for the
container to start. Defaults to 20 seconds.
- `template_parameters` (array of strings) - Options to pass to the given
`lxc-template` command, usually located in
`/usr/share/lxc/templates/lxc-<template_name>`. Note: This gets passed as
ARGV to the template command. Ensure you have an array of strings, as
a single string with spaces probably won't work. Defaults to `[]`.
- `template_parameters` (array of strings) - Options to pass to the given
`lxc-template` command, usually located in
`/usr/share/lxc/templates/lxc-<template_name>`. Note: This gets passed as
ARGV to the template command. Ensure you have an array of strings, as a
single string with spaces probably won't work. Defaults to `[]`.
- `create_options` (array of strings) - Options to pass to `lxc-create`. For
instance, you can specify a custom LXC container configuration file with
`["-f", "/path/to/lxc.conf"]`. Defaults to `[]`. See `man 1 lxc-create` for
available options.
- `create_options` (array of strings) - Options to pass to `lxc-create`. For
instance, you can specify a custom LXC container configuration file with
`["-f", "/path/to/lxc.conf"]`. Defaults to `[]`. See `man 1 lxc-create` for
available options.
- `start_options` (array of strings) - Options to pass to `lxc-start`. For
instance, you can override parameters from the LXC container configuration
file via `["--define", "KEY=VALUE"]`. Defaults to `[]`. See `man 1
lxc-start` for available options.
- `start_options` (array of strings) - Options to pass to `lxc-start`. For
instance, you can override parameters from the LXC container configuration
file via `["--define", "KEY=VALUE"]`. Defaults to `[]`. See
`man 1 lxc-start` for available options.
- `attach_options` (array of strings) - Options to pass to `lxc-attach`. For
instance, you can prevent the container from inheriting the host machine's
environment by specifying `["--clear-env"]`. Defaults to `[]`. See `man 1
lxc-attach` for available options.
- `attach_options` (array of strings) - Options to pass to `lxc-attach`. For
instance, you can prevent the container from inheriting the host machine's
environment by specifying `["--clear-env"]`. Defaults to `[]`. See
`man 1 lxc-attach` for available options.
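As a sketch of how the array-valued options fit together, a hypothetical
template might look like the following (the config file path, template name,
and option values below are illustrative placeholders, not defaults):

``` json
{
  "builders": [
    {
      "type": "lxc",
      "config_file": "/etc/lxc/default.conf",
      "template_name": "debian",
      "template_environment_vars": ["SUITE=stretch"],
      "create_options": ["-f", "/path/to/lxc.conf"],
      "start_options": ["--define", "lxc.network.type=none"],
      "attach_options": ["--clear-env"]
    }
  ]
}
```

Note that each of the three option arrays is passed to a different `lxc-*`
command, so consult the corresponding man page when choosing values.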

View File

@ -1,23 +1,23 @@
---
description: |
The `lxd` Packer builder builds containers for LXD. The builder starts an LXD
container, runs provisioners within this container, then saves the container
as an LXD image.
container, runs provisioners within this container, then saves the container as
an LXD image.
layout: docs
page_title: 'LXD - Builders'
sidebar_current: 'docs-builders-lxd'
...
---
# LXD Builder
Type: `lxd`
The `lxd` Packer builder builds containers for LXD. The builder starts an LXD
container, runs provisioners within this container, then saves the container
as an LXD image.
container, runs provisioners within this container, then saves the container as
an LXD image.
The LXD builder requires a modern linux kernel and the `lxd` package.
This builder does not work with LXC.
The LXD builder requires a modern linux kernel and the `lxd` package. This
builder does not work with LXC.
## Basic Example
@ -39,38 +39,37 @@ Below is a fully functioning example.
}
```
## Configuration Reference
### Required:
- `image` (string) - The source image to use when creating the build
container. This can be a (local or remote) image (name or fingerprint). E.g.
`my-base-image`, `ubuntu-daily:x`, `08fababf6f27`, ...
- `image` (string) - The source image to use when creating the build
container. This can be a (local or remote) image (name or fingerprint).
E.g. `my-base-image`, `ubuntu-daily:x`, `08fababf6f27`, ...
~> Note: The builder may appear to pause if required to download
a remote image, as they are usually 100-200MB. `/var/log/lxd/lxd.log` will
~> Note: The builder may appear to pause if required to download a
remote image, as they are usually 100-200MB. `/var/log/lxd/lxd.log` will
mention starting such downloads.
### Optional:
- `init_sleep` (string) - The number of seconds to sleep between launching the
LXD instance and provisioning it; defaults to 3 seconds.
- `init_sleep` (string) - The number of seconds to sleep between launching
the LXD instance and provisioning it; defaults to 3 seconds.
- `name` (string) - The name of the started container. Defaults to
`packer-$PACKER_BUILD_NAME`.
- `name` (string) - The name of the started container. Defaults to
`packer-$PACKER_BUILD_NAME`.
- `output_image` (string) - The name of the output artifact. Defaults to
`name`.
- `output_image` (string) - The name of the output artifact. Defaults to
`name`.
- `command_wrapper` (string) - Lets you prefix all builder commands, such as
with `ssh` for a remote build host. Defaults to `""`.
- `command_wrapper` (string) - Lets you prefix all builder commands, such as
with `ssh` for a remote build host. Defaults to `""`.
- `publish_properties` (map[string]string) - Pass key values to the publish
step to be set as properties on the output image. This is most helpful to
set the description, but can be used to set anything needed.
See https://stgraber.org/2016/03/30/lxd-2-0-image-management-512/
for more properties.
- `publish_properties` (map\[string\]string) - Pass key values to the publish
step to be set as properties on the output image. This is most helpful to
set the description, but can be used to set anything needed. See
<https://stgraber.org/2016/03/30/lxd-2-0-image-management-512/> for more
properties.
- `launch_config` (map[string]string) - List of key/value pairs you wish to
pass to `lxc launch` via `--config`. Defaults to empty.
- `launch_config` (map\[string\]string) - List of key/value pairs you wish to
pass to `lxc launch` via `--config`. Defaults to empty.
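For instance, a hypothetical configuration combining `publish_properties` and
`launch_config` could look like this (the image, output name, and key/value
pairs are placeholders for illustration):

``` json
{
  "builders": [
    {
      "type": "lxd",
      "image": "ubuntu-daily:x",
      "output_image": "my-lxd-image",
      "publish_properties": {
        "description": "Base image built with Packer"
      },
      "launch_config": {
        "security.privileged": "true"
      }
    }
  ]
}
```

The `publish_properties` entries end up as properties on the published image,
while `launch_config` entries are passed through to `lxc launch --config`.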

View File

@ -1,6 +1,7 @@
---
description: |
The ncloud builder allows you to create server images using the NAVER Cloud Platform.
The ncloud builder allows you to create server images using the NAVER Cloud
Platform.
layout: docs
page_title: 'Naver Cloud Platform - Builders'
sidebar_current: 'docs-builders-ncloud'
@ -16,12 +17,12 @@ Platform](https://www.ncloud.com/).
### Required:
- `ncloud_access_key` (string) - User's access key. Go to [\[Account
Management \> Authentication
Management \> Authentication
Key\]](https://www.ncloud.com/mypage/manage/authkey) to create and view
your authentication key.
- `ncloud_secret_key` (string) - User's secret key paired with the access
key. Go to [\[Account Management \> Authentication
key. Go to [\[Account Management \> Authentication
Key\]](https://www.ncloud.com/mypage/manage/authkey) to create and view
your authentication key.
@ -61,66 +62,61 @@ Platform](https://www.ncloud.com/).
(default: Korea)
- values: Korea / US-West / HongKong / Singapore / Japan / Germany
## Sample code of template.json
```
{
"variables": {
"ncloud_access_key": "FRxhOQRNjKVMqIz3sRLY",
"ncloud_secret_key": "xd6kTO5iNcLookBx0D8TDKmpLj2ikxqEhc06MQD2"
},
"builders": [
{
"type": "ncloud",
"access_key": "{{user `ncloud_access_key`}}",
"secret_key": "{{user `ncloud_secret_key`}}",
"variables": {
"ncloud_access_key": "FRxhOQRNjKVMqIz3sRLY",
"ncloud_secret_key": "xd6kTO5iNcLookBx0D8TDKmpLj2ikxqEhc06MQD2"
},
"builders": [
{
"type": "ncloud",
"access_key": "{{user `ncloud_access_key`}}",
"secret_key": "{{user `ncloud_secret_key`}}",
"server_image_product_code": "SPSW0WINNT000016",
"server_product_code": "SPSVRSSD00000011",
"member_server_image_no": "4223",
"server_image_name": "packer-test {{timestamp}}",
"server_description": "server description",
"user_data": "CreateObject(\"WScript.Shell\").run(\"cmd.exe /c powershell Set-ExecutionPolicy RemoteSigned & winrm quickconfig -q & sc config WinRM start= auto & winrm set winrm/config/service/auth @{Basic=\"\"true\"\"} & winrm set winrm/config/service @{AllowUnencrypted=\"\"true\"\"} & winrm get winrm/config/service\")",
"region": "US-West"
"server_image_product_code": "SPSW0WINNT000016",
"server_product_code": "SPSVRSSD00000011",
"member_server_image_no": "4223",
"server_image_name": "packer-test {{timestamp}}",
"server_description": "server description",
"user_data": "CreateObject(\"WScript.Shell\").run(\"cmd.exe /c powershell Set-ExecutionPolicy RemoteSigned & winrm quickconfig -q & sc config WinRM start= auto & winrm set winrm/config/service/auth @{Basic=\"\"true\"\"} & winrm set winrm/config/service @{AllowUnencrypted=\"\"true\"\"} & winrm get winrm/config/service\")",
"region": "US-West"
}
]
}
]
}
```
## Requirements for creating Windows images
You should include the following code in the packer configuration file for
provision when creating a Windows server.
```
"builders": [
{
"type": "ncloud",
...
"user_data":
"CreateObject(\"WScript.Shell\").run(\"cmd.exe /c powershell Set-ExecutionPolicy RemoteSigned & winrm set winrm/config/service/auth @{Basic=\"\"true\"\"} & winrm set winrm/config/service @{AllowUnencrypted=\"\"true\"\"} & winrm quickconfig -q & sc config WinRM start= auto & winrm get winrm/config/service\")",
"communicator": "winrm",
"winrm_username": "Administrator"
}
],
"provisioners": [
{
"type": "powershell",
"inline": [
"$Env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /shutdown /quiet \"/unattend:C:\\Program Files (x86)\\NBP\\nserver64.xml\" "
"builders": [
{
"type": "ncloud",
...
"user_data":
"CreateObject(\"WScript.Shell\").run(\"cmd.exe /c powershell Set-ExecutionPolicy RemoteSigned & winrm set winrm/config/service/auth @{Basic=\"\"true\"\"} & winrm set winrm/config/service @{AllowUnencrypted=\"\"true\"\"} & winrm quickconfig -q & sc config WinRM start= auto & winrm get winrm/config/service\")",
"communicator": "winrm",
"winrm_username": "Administrator"
}
],
"provisioners": [
{
"type": "powershell",
"inline": [
"$Env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /shutdown /quiet \"/unattend:C:\\Program Files (x86)\\NBP\\nserver64.xml\" "
]
}
]
}
]
```
## Note
* You can only create as many public IP addresses as the number of server
instances you own. Before running Packer, please make sure that the number of
public IP addresses previously created is not larger than the number of
server instances (including those to be used to create server images).
* When you forcibly terminate the packer process or close the terminal
(command) window where the process is running, the resources may not be
cleaned up as the packer process no longer runs. In this case, you should
manually clean up the resources associated with the process.
- You can only create as many public IP addresses as the number of server
instances you own. Before running Packer, please make sure that the number
of public IP addresses previously created is not larger than the number of
server instances (including those to be used to create server images).
- When you forcibly terminate the Packer process or close the terminal
  (command) window where the process is running, the resources may not be
  cleaned up as the Packer process no longer runs. In this case, you should
  manually clean up the resources associated with the process.

View File

@ -9,7 +9,8 @@ sidebar_current: 'docs-builders-oneandone'
Type: `oneandone`
The 1&1 Builder is able to create virtual machines for [1&1](https://www.1and1.com/).
The 1&1 Builder is able to create virtual machines for
[1&1](https://www.1and1.com/).
## Configuration Reference
@ -25,19 +26,25 @@ builder.
- `source_image_name` (string) - 1&1 Server Appliance name of type `IMAGE`.
- `token` (string) - 1&1 REST API Token. This can be specified via environment variable `ONEANDONE_TOKEN`
- `token` (string) - 1&1 REST API Token. This can be specified via
environment variable `ONEANDONE_TOKEN`
### Optional
- `data_center_name` - Name of virtual data center. Possible values "ES", "US", "GB", "DE". Default value "US"
- `data_center_name` - Name of virtual data center. Possible values "ES",
"US", "GB", "DE". Default value "US"
- `disk_size` (string) - Amount of disk space for this image in GB. Defaults to "50"
- `disk_size` (string) - Amount of disk space for this image in GB. Defaults
to "50"
- `image_name` (string) - Resulting image. If "image\_name" is not provided Packer will generate it
- `image_name` (string) - Resulting image. If "image\_name" is not provided
Packer will generate it
- `retries` (number) - Number of retries Packer will make status requests while waiting for the build to complete. Default value "600".
- `retries` (number) - Number of retries Packer will make status requests
while waiting for the build to complete. Default value "600".
- `url` (string) - Endpoint for the 1&1 REST API. Default URL "<https://cloudpanel-api.1and1.com/v1>"
- `url` (string) - Endpoint for the 1&1 REST API. Default URL
"<https://cloudpanel-api.1and1.com/v1>"
## Example

View File

@ -1,10 +1,10 @@
---
description: |
The openstack Packer builder is able to create new images for use with
OpenStack. The builder takes a source image, runs any provisioning necessary
on the image after launching it, then creates a new reusable image. This
reusable image can then be used as the foundation of new servers that are
launched within OpenStack.
OpenStack. The builder takes a source image, runs any provisioning necessary on
the image after launching it, then creates a new reusable image. This reusable
image can then be used as the foundation of new servers that are launched
within OpenStack.
layout: docs
page_title: 'OpenStack - Builders'
sidebar_current: 'docs-builders-openstack'
@ -33,12 +33,11 @@ builder with OpenStack Liberty (Oct 2015) or later you need to have OpenSSL
installed *if you are using temporary key pairs*, i.e. don't use
[`ssh_keypair_name`](openstack.html#ssh_keypair_name) nor
[`ssh_password`](/docs/templates/communicator.html#ssh_password). All major
OSes have OpenSSL installed by default except Windows. This has been
resolved in OpenStack Ocata (Feb 2017).
OSes have OpenSSL installed by default except Windows. This has been resolved
in OpenStack Ocata (Feb 2017).
~> **Note:** OpenStack Block Storage volume support is available only for
V3 Block Storage API. It's available in OpenStack since Mitaka release
(Apr 2016).
~> **Note:** OpenStack Block Storage volume support is available only for V3
Block Storage API. It's available in OpenStack since Mitaka release (Apr 2016).
## Configuration Reference
@ -52,8 +51,8 @@ builder.
### Required:
- `flavor` (string) - The ID, name, or full URL for the desired flavor for the
server to be created.
- `flavor` (string) - The ID, name, or full URL for the desired flavor for
the server to be created.
- `image_name` (string) - The name of the resulting image.
@ -66,9 +65,9 @@ builder.
Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly.
- `source_image_name` (string) - The name of the base image to use. This
is an alternative way of providing `source_image` and only either of them
can be specified.
- `source_image_name` (string) - The name of the base image to use. This is
an alternative way of providing `source_image` and only either of them can
be specified.
- `source_image_filter` (map) - The search filters for determining the base
image to use. This is an alternative way of providing `source_image` and
@ -80,23 +79,22 @@ builder.
variable `OS_USERNAME` or `OS_USERID`, if set. This is not required if
using access token instead of password or if using `cloud.yaml`.
- `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variables `OS_PASSWORD`,
if set. This is not required if using access token instead of password or
if using `cloud.yaml`.
- `password` (string) - The password used to connect to the OpenStack
service. If not specified, Packer will use the environment variables
`OS_PASSWORD`, if set. This is not required if using access token instead
of password or if using `cloud.yaml`.
### Optional:
- `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters.
- `availability_zone` (string) - The availability zone to launch the server
in. If this isn't specified, the default enforced by your OpenStack cluster
will be used. This may be required for some OpenStack clusters.
- `cacert` (string) - Custom CA certificate file path.
If omitted the `OS_CACERT` environment variable can be used.
- `cacert` (string) - Custom CA certificate file path. If omitted the
`OS_CACERT` environment variable can be used.
- `cert` (string) - Client certificate file path for SSL client authentication.
If omitted the `OS_CERT` environment variable can be used.
- `cert` (string) - Client certificate file path for SSL client
authentication. If omitted the `OS_CERT` environment variable can be used.
- `cloud` (string) - An entry in a `clouds.yaml` file. See the OpenStack
os-client-config
@ -108,12 +106,13 @@ builder.
cloud-init metadata.
- `domain_name` or `domain_id` (string) - The Domain name or ID you are
authenticating with. OpenStack installations require this if identity v3 is used.
Packer will use the environment variable `OS_DOMAIN_NAME` or `OS_DOMAIN_ID`, if set.
authenticating with. OpenStack installations require this if identity v3 is
used. Packer will use the environment variable `OS_DOMAIN_NAME` or
`OS_DOMAIN_ID`, if set.
- `endpoint_type` (string) - The endpoint type to use. Can be any of "internal",
"internalURL", "admin", "adminURL", "public", and "publicURL". By default
this is "public".
- `endpoint_type` (string) - The endpoint type to use. Can be any of
"internal", "internalURL", "admin", "adminURL", "public", and "publicURL".
By default this is "public".
- `floating_ip` (string) - A specific floating IP to assign to this instance.
@ -133,14 +132,15 @@ builder.
- `insecure` (boolean) - Whether or not the connection to OpenStack can be
done over an insecure connection. By default this is false.
- `key` (string) - Client private key file path for SSL client authentication.
If omitted the `OS_KEY` environment variable can be used.
- `key` (string) - Client private key file path for SSL client
authentication. If omitted the `OS_KEY` environment variable can be used.
- `metadata` (object of key/value strings) - Glance metadata that will be
applied to the image.
- `instance_name` (string) - Name that is applied to the server instance
created by Packer. If this isn't specified, the default is the same as `image_name`.
created by Packer. If this isn't specified, the default is the same as
`image_name`.
- `instance_metadata` (object of key/value strings) - Metadata that is
applied to the server instance created by Packer. Also called server
@ -150,16 +150,16 @@ builder.
- `networks` (array of strings) - A list of networks by UUID to attach to
this instance.
- `ports` (array of strings) - A list of ports by UUID to attach to
this instance.
- `ports` (array of strings) - A list of ports by UUID to attach to this
instance.
- `rackconnect_wait` (boolean) - For rackspace, whether or not to wait for
Rackconnect to assign the machine an IP address before connecting via SSH.
Defaults to false.
- `region` (string) - The name of the region, such as "DFW", in which to
launch the server to create the image. If not specified, Packer will use the
environment variable `OS_REGION_NAME`, if set.
launch the server to create the image. If not specified, Packer will use
the environment variable `OS_REGION_NAME`, if set.
- `reuse_ips` (boolean) - Whether or not to attempt to reuse existing
unassigned floating ips in the project before allocating a new one. Note
@ -188,39 +188,41 @@ builder.
}
```
This selects the most recent production Ubuntu 16.04 shared to you by the given owner.
NOTE: This will fail unless *exactly* one image is returned, or `most_recent` is set to true.
In the example of multiple returned images, `most_recent` will cause this to succeed by selecting
the newest image of the returned images.
This selects the most recent production Ubuntu 16.04 shared to you by the
given owner. NOTE: This will fail unless *exactly* one image is returned,
or `most_recent` is set to true. In the example of multiple returned
images, `most_recent` will cause this to succeed by selecting the newest
image of the returned images.
- `filters` (map of strings) - filters used to select a `source_image`.
NOTE: This will fail unless *exactly* one image is returned, or `most_recent` is set to true.
Of the filters described in [ImageService](https://developer.openstack.org/api-ref/image/v2/), the following
are valid:
NOTE: This will fail unless *exactly* one image is returned, or
`most_recent` is set to true. Of the filters described in
[ImageService](https://developer.openstack.org/api-ref/image/v2/), the
following are valid:
- name (string)
- name (string)
- owner (string)
- owner (string)
- tags (array of strings)
- tags (array of strings)
- visibility (string)
- visibility (string)
- `most_recent` (boolean) - Selects the newest created image when true.
This is most useful for selecting a daily distro build.
You may use this in place of `source_image`. If `source_image_filter` is provided
alongside `source_image`, the `source_image` will override the filter. The filter
will not be used in this case.
You may use this in place of `source_image`. If `source_image_filter` is
provided alongside `source_image`, the `source_image` will override the
filter. The filter will not be used in this case.
- `ssh_interface` (string) - The type of interface to connect via SSH. Values
useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API.
- `ssh_ip_version` (string) - The IP version to use for SSH connections, valid
values are `4` and `6`. Useful on dual stacked instances where the default
behavior is to connect via whichever IP address is returned first from the
OpenStack API.
- `ssh_ip_version` (string) - The IP version to use for SSH connections,
valid values are `4` and `6`. Useful on dual stacked instances where the
default behavior is to connect via whichever IP address is returned first
from the OpenStack API.
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. By default, this is blank, and Packer will
@ -231,26 +233,27 @@ builder.
- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
authenticate connections to the source instance. No temporary keypair will
be created, and the values of `ssh_password` and `ssh_private_key_file` will
be ignored. To use this option with a key pair already configured in the source
image, leave the `ssh_keypair_name` blank. To associate an existing key pair
with the source instance, set the `ssh_keypair_name` field to the name
of the key pair.
be created, and the values of `ssh_password` and `ssh_private_key_file`
will be ignored. To use this option with a key pair already configured in
the source image, leave the `ssh_keypair_name` blank. To associate an
existing key pair with the source instance, set the `ssh_keypair_name`
field to the name of the key pair.
- `temporary_key_pair_name` (string) - The name of the temporary key pair
to generate. By default, Packer generates a name that looks like
- `temporary_key_pair_name` (string) - The name of the temporary key pair to
generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where `<UUID>` is a 36-character unique identifier.
- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this. If not specified,
Packer will use the environment variable `OS_TENANT_NAME` or `OS_TENANT_ID`,
if set. Tenant is also called Project in later versions of OpenStack.
Packer will use the environment variable `OS_TENANT_NAME` or
`OS_TENANT_ID`, if set. Tenant is also called Project in later versions of
OpenStack.
- `token` (string) - the token (id) to use with token based authorization.
Packer will use the environment variable `OS_TOKEN`, if set.
- `use_floating_ip` (boolean) - *Deprecated* use `floating_ip` or `floating_ip_pool`
instead.
- `use_floating_ip` (boolean) - *Deprecated* use `floating_ip` or
`floating_ip_pool` instead.
- `user_data` (string) - User data to apply when launching the instance. Note
that you need to be careful about escaping characters due to the templates
@ -275,12 +278,13 @@ builder.
zones aren't specified, the default enforced by your OpenStack cluster will
be used.
- `image_disk_format` (string) - Disk format of the resulting image.
This option works if `use_blockstorage_volume` is true.
- `image_disk_format` (string) - Disk format of the resulting image. This
option works if `use_blockstorage_volume` is true.
## Basic Example: DevStack
Here is a basic example. This is an example to build on DevStack running in a VM.
Here is a basic example. This is an example to build on DevStack running in a
VM.
``` json
{
@ -350,9 +354,9 @@ This is slightly different when identity v3 is used:
- `OS_DOMAIN_NAME`
- `OS_TENANT_NAME`
This will authenticate the user on the domain and scope you to the project.
A tenant is the same as a project. It's optional to use names or IDs in v3.
This means you can use `OS_USERNAME` or `OS_USERID`, `OS_TENANT_ID` or
This will authenticate the user on the domain and scope you to the project. A
tenant is the same as a project. It's optional to use names or IDs in v3. This
means you can use `OS_USERNAME` or `OS_USERID`, `OS_TENANT_ID` or
`OS_TENANT_NAME` and `OS_DOMAIN_ID` or `OS_DOMAIN_NAME`.
The above example would be equivalent to an RC file looking like this :
@ -395,9 +399,10 @@ by Selectel VPC.
The simplest way to get all settings for authorization against OpenStack is to
go into the OpenStack Dashboard (Horizon) select your *Project* and navigate
*Project, Access & Security*, select *API Access* and *Download OpenStack RC
File v3*. Source the file, and select your wanted region
by setting environment variable `OS_REGION_NAME` or `OS_REGION_ID` and
`export OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`.
File v3*. Source the file, and select your desired region by setting the
environment variable `OS_REGION_NAME` or `OS_REGION_ID` and
`export OS_TENANT_NAME=$OS_PROJECT_NAME` or
`export OS_TENANT_ID=$OS_PROJECT_ID`.
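That mapping can be sketched in a shell as follows (the project name, project
ID, and region below are placeholders; in practice the sourced RC file sets
the real `OS_PROJECT_*` values):

``` shell
# Placeholders standing in for values the sourced RC file would provide.
export OS_PROJECT_NAME="my-project"
export OS_PROJECT_ID="0123456789abcdef"

# Pick the region you want to build in.
export OS_REGION_NAME="ru-1"

# Packer reads the OS_TENANT_* variables, so mirror the project values.
export OS_TENANT_NAME=$OS_PROJECT_NAME
export OS_TENANT_ID=$OS_PROJECT_ID
```

After this, Packer sees the tenant variables it expects even though the v3 RC
file only defines project variables.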
~> `OS_TENANT_NAME` or `OS_TENANT_ID` must be used even with Identity v3;
`OS_PROJECT_NAME` and `OS_PROJECT_ID` have no effect in Packer.
@ -409,9 +414,9 @@ OpenStack cli. It can be installed with
### Authorize Using Tokens
To authorize with an access token, only `identity_endpoint` and `token` are needed,
and possibly `tenant_name` or `tenant_id` depending on your token type. Or use
the following environment variables:
To authorize with an access token, only `identity_endpoint` and `token` are
needed, and possibly `tenant_name` or `tenant_id` depending on your token type.
Or use the following environment variables:
- `OS_AUTH_URL`
- `OS_TOKEN`
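For illustration, the token-based flow described above could be expressed in a
builder stanza like the following sketch; the endpoint, token, and tenant
values are placeholders, and only the authorization-related fields are shown (a
real template also needs the builder's other required options):

``` json
{
  "builders": [
    {
      "type": "openstack",
      "identity_endpoint": "https://keystone.example.com:5000/v3",
      "token": "YOUR_TOKEN",
      "tenant_name": "YOUR_TENANT"
    }
  ]
}
```

Values defined in the template take precedence over the corresponding
`OS_*` environment variables listed above.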


@ -1,7 +1,7 @@
---
description: |
The oracle-oci builder is able to create new custom images for use with Oracle
Cloud Infrastructure (OCI).
layout: docs
page_title: 'Oracle OCI - Builders'
sidebar_current: 'docs-builders-oracle-oci'
@ -16,42 +16,45 @@ with [Oracle Cloud Infrastructure](https://cloud.oracle.com) (OCI). The builder
takes a base image, runs any provisioning necessary on the base image after
launching it, and finally snapshots it creating a reusable custom image.
It is recommended that you familiarise yourself with the [Key Concepts and
Terminology](https://docs.us-phoenix-1.oraclecloud.com/Content/GSG/Concepts/concepts.htm)
prior to using this builder if you have not done so already.
The builder *does not* manage images. Once it creates an image, it is up to you
to use it or delete it.
## Authorization
The Oracle OCI API requires that requests be signed with the RSA public key
associated with your
[IAM](https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm)
user account. For a comprehensive example of how to configure the required
authentication see the documentation on [Required Keys and
OCIDs](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm)
([Oracle Cloud
IDs](https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm)).
## Configuration Reference
There are many configuration options available for the `oracle-oci` builder. In
addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required
- `availability_domain` (string) - The name of the [Availability
Domain](https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm)
within which a new instance is launched and provisioned. The names of the
Availability Domains have a prefix that is specific to your
[tenancy](https://docs.us-phoenix-1.oraclecloud.com/Content/GSG/Concepts/concepts.htm#two).
To get a list of the Availability Domains, use the
[ListAvailabilityDomains](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/identity/latest/AvailabilityDomain/ListAvailabilityDomains)
operation, which is available in the IAM Service API.
- `base_image_ocid` (string) - The OCID of the [base
image](https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/References/images.htm)
to use. This is the unique identifier of the image that will be used to
launch a new instance and provision it.
@ -59,23 +62,22 @@ builder.
[ListImages](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/iaas/latest/Image/ListImages)
operation available in the Core Services API.
- `compartment_ocid` (string) - The OCID of the
[compartment](https://docs.us-phoenix-1.oraclecloud.com/Content/GSG/Tasks/choosingcompartments.htm)
- `fingerprint` (string) - Fingerprint for the OCI API signing key. Overrides
value provided by the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
if present.
- `shape` (string) - The template that determines the number of CPUs, amount
of memory, and other resources allocated to a newly created instance.
To get a list of the available shapes, use the
[ListShapes](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/iaas/20160918/Shape/ListShapes)
operation available in the Core Services API.
- `subnet_ocid` (string) - The name of the subnet within which a new instance
is launched and provisioned.
To get a list of your subnets, use the
@ -86,59 +88,68 @@ builder.
[communicator](/docs/templates/communicator.html) (communicator defaults to
[SSH tcp/22](/docs/templates/communicator.html#ssh_port)).
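As a sketch only, the required options above fit together into a minimal
`oracle-oci` template like the following; every OCID, the availability domain,
the shape, and the `ssh_username` value are placeholder assumptions, not values
from this guide:

``` json
{
  "builders": [
    {
      "type": "oracle-oci",
      "availability_domain": "aaaa:PHX-AD-1",
      "base_image_ocid": "ocid1.image.oc1.phx.EXAMPLE",
      "compartment_ocid": "ocid1.compartment.oc1..EXAMPLE",
      "shape": "VM.Standard1.1",
      "subnet_ocid": "ocid1.subnet.oc1.phx.EXAMPLE",
      "ssh_username": "opc"
    }
  ]
}
```

The optional settings below can be layered on top of this skeleton.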
### Optional
- `access_cfg_file` (string) - The path to the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm).
Defaults to `$HOME/.oci/config`.
- `access_cfg_file_account` (string) - The specific account in the [OCI
config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
to use. Defaults to `DEFAULT`.
- `image_name` (string) - The name to assign to the resulting custom image.
- `key_file` (string) - Full path and filename of the OCI API signing key.
Overrides value provided by the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
if present.
- `pass_phrase` (string) - Pass phrase used to decrypt the OCI API signing
key. Overrides value provided by the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
if present.
- `region` (string) - An Oracle Cloud Infrastructure region. Overrides value
provided by the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
if present.
- `tenancy_ocid` (string) - The OCID of your tenancy. Overrides value
provided by the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
if present.
- `user_ocid` (string) - The OCID of the user calling the OCI API. Overrides
value provided by the [OCI config
file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm)
if present.
- `use_private_ip` (boolean) - Use private IP addresses to connect to the
instance via SSH.
- `metadata` (map of strings) - Metadata optionally contains custom metadata
key/value pairs provided in the configuration. While this can be used to
set metadata\["user\_data"\], the explicit "user\_data" and
"user\_data\_file" values will have precedence. An instance's metadata can
be obtained at <http://169.254.169.254> on the launched instance.
- `user_data` (string) - user\_data to be used by cloud init. See [the Oracle
docs](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/iaas/20160918/LaunchInstanceDetails)
for more details. Generally speaking, it is easier to use the
`user_data_file`, but you can use this option to put either the plaintext
data or the base64 encoded data directly into your Packer config.
- `user_data_file` (string) - Path to a file to be used as user\_data by
cloud init. See [the Oracle
docs](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/iaas/20160918/LaunchInstanceDetails)
for more details. Example: `"user_data_file": "./boot_config/myscript.sh"`
- `tags` (map of strings) - Add one or more freeform tags to the resulting
custom image. See [the Oracle
docs](https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/taggingoverview.htm)
for more details. Example:
``` {.yaml}
"tags":


@ -1,6 +1,5 @@
---
description: 'Packer is able to create custom images using Oracle Cloud Infrastructure.'
layout: docs
page_title: 'Oracle - Builders'
sidebar_current: 'docs-builders-oracle'


@ -17,16 +17,16 @@ Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:
- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO
file, creates a brand new Parallels VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports
an existing PVM file, runs provisioners on top of that VM, and exports that
machine to create an image. This is best if you have an existing Parallels
VM export you want to use as the source. As an additional benefit, you can
feed the artifact of this builder back into itself to iterate on a machine.
## Requirements


@ -9,7 +9,8 @@ sidebar_current: 'docs-builders-profitbricks'
Type: `profitbricks`
The ProfitBricks Builder is able to create virtual machines for
[ProfitBricks](https://www.profitbricks.com).
## Configuration Reference
@ -23,31 +24,43 @@ builder.
### Required
- `image` (string) - ProfitBricks volume image. Only Linux public images are
supported. To obtain a full list of available images you can use the
[ProfitBricks CLI](https://github.com/profitbricks/profitbricks-cli#image).
- `password` (string) - ProfitBricks password. This can be specified via the
environment variable `PROFITBRICKS_PASSWORD`. The value defined in the
config has precedence over the environment variable.
- `username` (string) - ProfitBricks username. This can be specified via the
environment variable `PROFITBRICKS_USERNAME`. The value defined in the
config has precedence over the environment variable.
### Optional
- `cores` (number) - Number of CPU cores to use for this build. Defaults to
"4".
- `disk_size` (string) - Amount of disk space for this image in GB. Defaults
to "50".
- `disk_type` (string) - Type of disk to use for this image. Defaults to
"HDD".
- `location` (string) - Defaults to "us/las".
- `ram` (number) - Amount of RAM to use for this image. Defaults to "2048".
- `retries` (string) - Number of times Packer will retry status requests
while waiting for the build to complete. Default value is 120 seconds.
- `snapshot_name` (string) - If a snapshot name is not provided Packer will
generate one.
- `snapshot_password` (string) - Password for the snapshot.
- `url` (string) - Endpoint for the ProfitBricks REST API. Default URL
"<https://api.profitbricks.com/rest/v2>"
## Example


@ -1,28 +1,27 @@
---
layout: docs
description: |-
The Scaleway Packer builder is able to create new images for use with
Scaleway BareMetal and Virtual cloud server. The builder takes a source
image, runs any provisioning necessary on the image after launching it, then
snapshots it into a reusable image. This reusable image can then be used as
the foundation of new servers that are launched within Scaleway.
page_title: 'Scaleway - Builders'
sidebar_current: 'docs-builders-scaleway'
---
# Scaleway Builder
Type: `scaleway`
The `scaleway` Packer builder is able to create new images for use with
[Scaleway](https://www.scaleway.com). The builder takes a source image, runs
any provisioning necessary on the image after launching it, then snapshots it
into a reusable image. This reusable image can then be used as the foundation
of new servers that are launched within Scaleway.
The builder does *not* manage snapshots. Once it creates an image, it is up to
you to use it or delete it.
## Configuration Reference
@ -36,17 +35,16 @@ builder.
### Required:
- `api_access_key` (string) - The organization access key to use to identify
your organization. It can also be specified via environment variable
`SCALEWAY_API_ACCESS_KEY`. Your access key is available in the
["Credentials" section](https://cloud.scaleway.com/#/credentials) of the
control panel.
- `api_token` (string) - The token to use to authenticate with your account.
It can also be specified via environment variable `SCALEWAY_API_TOKEN`. You
can see and generate tokens in the ["Credentials"
section](https://cloud.scaleway.com/#/credentials) of the control panel.
- `image` (string) - The UUID of the base image to use. This is the image
that will be used to launch a new server and provision it. See
@ -58,9 +56,10 @@ builder.
available.
- `commercial_type` (string) - The name of the server commercial type:
`ARM64-128GB`, `ARM64-16GB`, `ARM64-2GB`, `ARM64-32GB`, `ARM64-4GB`,
`ARM64-64GB`, `ARM64-8GB`, `C1`, `C2L`, `C2M`, `C2S`, `START1-L`,
`START1-M`, `START1-S`, `START1-XS`, `X64-120GB`, `X64-15GB`, `X64-30GB`,
`X64-60GB`
### Optional:
@ -73,18 +72,18 @@ builder.
- `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. Default `packer-TIMESTAMP`
- `boottype` (string) - The type of boot, either `local` or `bootscript`.
Defaults to `bootscript`.
- `bootscript` (string) - The ID of an existing bootscript to use when
booting the server.
## Basic Example
Here is a basic example. It is completely valid as soon as you enter your own
access tokens:
``` json
{
"type": "scaleway",
"api_access_key": "YOUR API ACCESS KEY",


@ -1,9 +1,9 @@
---
description: |
The triton Packer builder is able to create new images for use with Triton.
These images can be used with both the Joyent public cloud (which is powered by
Triton) as well with private Triton installations. This builder uses the Triton
Cloud API to create images.
layout: docs
page_title: 'Triton - Builders'
sidebar_current: 'docs-builders-triton'
@ -30,12 +30,12 @@ This reusable image can then be used to launch new machines.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
~> **Private installations of Triton must have custom images enabled!** To
use the Triton builder with a private/on-prem installation of Joyent's Triton
software, you'll need an operator to manually [enable custom
images](https://docs.joyent.com/private-cloud/install/image-management) after
installing Triton. This is not a requirement for Joyent's public cloud offering
of Triton.
## Configuration Reference
@ -59,19 +59,19 @@ builder.
- `source_machine_image` (string) - The UUID of the image to base the new
image on. Triton supports multiple types of images, called 'brands' in
Triton / Joyent lingo, for containers and VMs. See the chapter [Containers
and virtual machines](https://docs.joyent.com/public-cloud/instances) in
the Joyent Triton documentation for detailed information. The following
brands are currently supported by this builder: `joyent` and `kvm`. The
choice of base image automatically decides the brand. On the Joyent public
cloud a valid `source_machine_image` could for example be
`70e3ae72-96b6-11e6-9056-9737fd4d0764` for version 16.3.1 of the 64bit
SmartOS base image (a 'joyent' brand image). `source_machine_image_filter`
can be used to populate this UUID.
- `source_machine_package` (string) - The Triton package to use while
building the image. Does not affect (and does not have to be the same) as
the package which will be used for a VM instance running this image. On the
Joyent public cloud this could for example be `g3-standard-0.5-smartos`.
- `image_name` (string) - The name that the finished image in Triton will be
assigned. Maximum 512 characters but should in practice be much shorter
@ -86,30 +86,30 @@ builder.
### Optional:
- `triton_url` (string) - The URL of the Triton cloud API to use. If omitted
it will default to the `us-sw-1` region of the Joyent Public cloud. If you
are using your own private Triton installation you will have to supply the
URL of the cloud API of your own Triton installation.
- `triton_key_material` (string) - Path to the file in which the private key
of `triton_key_id` is stored. For example `/home/soandso/.ssh/id_rsa`. If
this is not specified, the SSH agent is used to sign requests with the
`triton_key_id` specified.
- `triton_user` (string) - The username of a user who has access to your
Triton account.
- `insecure_skip_tls_verify` - (bool) This allows skipping TLS verification
of the Triton endpoint. It is useful when connecting to a temporary Triton
installation such as Cloud-On-A-Laptop which does not generally use a
certificate signed by a trusted root CA. The default is `false`.
- `source_machine_firewall_enabled` (boolean) - Whether or not the firewall
of the VM used to create the image is enabled. The Triton firewall only
filters inbound traffic to the VM. All outbound traffic is always allowed.
Currently this builder does not provide an interface to add specific
firewall rules. Unless you have a global rule defined in Triton which
allows SSH traffic, enabling the firewall will interfere with the SSH
provisioner. The default is `false`.
- `source_machine_metadata` (object of key/value strings) - Triton metadata
applied to the VM used to create the image. Metadata can be used to pass
@ -120,22 +120,22 @@ builder.
set the `user-script` metadata key to have Triton start a user supplied
script after the VM has booted.
- `source_machine_name` (string) - Name of the VM used for building the
image. Does not affect (and does not have to be the same) as the name for a
VM instance running this image. Maximum 512 characters but should in
practice be much shorter (think between 5 and 20 characters). For example
`mysql-64-server-image-builder`. When omitted defaults to
`packer-builder-[image_name]`.
- `source_machine_networks` (array of strings) - The UUIDs of Triton
networks added to the source machine used for creating the image. For
example if any of the provisioners which are run need Internet access you
will need to add the UUIDs of the appropriate networks here. If this is
not specified, instances will be placed into the default Triton public and
internal networks.
- `source_machine_tags` (object of key/value strings) - Tags applied to the
VM used to create the image.
- `image_acls` (array of strings) - The UUIDs of the users which will have
access to this image. When omitted only the owner (the Triton user whose
@ -144,16 +144,16 @@ builder.
- `image_description` (string) - Description of the image. Maximum 512
characters.
- `image_eula_url` (string) - URL of the End User License Agreement (EULA)
for the image. Maximum 128 characters.
- `image_homepage` (string) - URL of the homepage where users can find
information about the image. Maximum 128 characters.
- `image_tags` (object of key/value strings) - Tag applied to the image.
- `source_machine_image_filter` (object) - Filters used to populate the
`source_machine_image` field. Example:
``` json
{
@ -167,8 +167,7 @@ builder.
## Basic Example
Below is a minimal example to create an image on the Joyent public cloud:
``` json
{
@ -203,8 +202,8 @@ users to be able to login via SSH with the same key used to create the VM via
the Cloud API. In more advanced scenarios, for example when using a
`source_machine_image`, one might use different credentials.
Available `triton_key_id`, `source_machine_package`, `source_machine_image`,
and `source_machine_networks` can be found by using the following [Triton
CLI](https://docs.joyent.com/public-cloud/api-access/cloudapi) commands:
`triton key list`, `triton package list`, `triton image list`, and
`triton network list` respectively.


@ -1,7 +1,7 @@
---
description: |
The VirtualBox Packer builder is able to create VirtualBox virtual machines and
export them in the OVA or OVF format.
layout: docs
page_title: 'VirtualBox - Builders'
sidebar_current: 'docs-builders-virtualbox'
@ -10,20 +10,21 @@ sidebar_current: 'docs-builders-virtualbox'
# VirtualBox Builder
The VirtualBox Packer builder is able to create
[VirtualBox](https://www.virtualbox.org) virtual machines and export them in
the OVA or OVF format.
Packer actually comes with multiple builders able to create VirtualBox
machines, depending on the strategy you want to use to build the image. Packer
supports the following VirtualBox builders:
- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO
file, creates a brand new VirtualBox VM, installs an OS, provisions
software within the OS, then exports that machine to create an image. This
is best for people who want to start from scratch.
@ -9,21 +9,21 @@ sidebar_current: 'docs-builders-vmware'
# VMware Builder
The VMware Packer builder is able to create VMware virtual machines for use with
any VMware product.
Packer actually comes with multiple builders able to create VMware machines,
depending on the strategy you want to use to build the image. Packer supports
the following VMware builders:
- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file,
creates a brand new VMware VM, installs an OS, provisions software within the
OS, then exports that machine to create an image. This is best for people who
want to start from scratch.
- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an
existing VMware machine (from a VMX file), runs provisioners on top of that
VM, and exports that machine to create an image. This is best if you have an
existing VMware VM you want to use as the source. As an additional benefit,
you can feed the artifact of this builder back into Packer to iterate on a
machine.
@ -11,37 +11,38 @@ sidebar_current: 'docs-commands-build'
# `build` Command
The `packer build` command takes a template and runs all the builds within it in
order to generate a set of artifacts. The various builds specified within a
template are executed in parallel, unless otherwise specified, and the artifacts
that are created will be outputted at the end of the build.
## Options
- `-color=false` - Disables colorized output. Enabled by default.
- `-debug` - Disables parallelization and enables debug mode. Debug mode flags
the builders that they should output debugging information. The exact behavior
of debug mode is left to the builder. In general, builders usually will stop
between each step, waiting for keyboard input before continuing. This will
allow the user to inspect state and so on.
- `-except=foo,bar,baz` - Builds all the builds except those with the given
comma-separated names. Build names by default are the names of their builders,
unless a specific `name` attribute is specified within the configuration.
- `-force` - Forces a builder to run when artifacts from a previous build
prevent a build from running. The exact behavior of a forced build is left to
the builder. In general, a builder supporting the forced build will remove the
artifacts from the previous build. This will allow the user to repeat a build
without having to manually clean these artifacts beforehand.
- `-on-error=cleanup` (default), `-on-error=abort`, `-on-error=ask` - Selects
what to do when the build fails. `cleanup` cleans up after the previous
steps, deleting temporary files and virtual machines. `abort` exits without
any cleanup, which might require the next build to use `-force`. `ask`
presents a prompt and waits for you to decide to clean up, abort, or retry the
failed step.
- `-only=foo,bar,baz` - Only build the builds with the given comma-separated
names. Build names by default are the names of their builders, unless a
@ -28,14 +28,15 @@ If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
output, you'll still see error messages.
-> **Even when Packer fix doesn't do anything** to the template, the template
will be outputted to standard out. Things such as configuration key ordering and
indentation may be changed. The output format, however, is pretty-printed for
human readability.
The full list of fixes that the fix command performs is visible in the help
output, which can be seen via `packer fix -h`.
## Options
- `-validate=false` - Disables validation of the fixed template. True by default.
@ -12,11 +12,11 @@ sidebar_current: 'docs-commands'
# Packer Commands (CLI)
Packer is controlled using a command-line interface. All interaction with Packer
is done via the `packer` tool. Like many other command-line tools, the `packer`
tool takes a subcommand to execute, and that subcommand may have additional
options as well. Subcommands are executed with `packer SUBCOMMAND`, where
"SUBCOMMAND" is the actual command you wish to execute.
If you run `packer` by itself, help will be displayed showing all available
subcommands and a brief synopsis of what they do. In addition to this, you can
@ -32,8 +32,8 @@ subcommand using the navigation to the left.
By default, the output of Packer is very human-readable. It uses nice
formatting, spacing, and colors in order to make Packer a pleasure to use.
However, Packer was built with automation in mind. To that end, Packer supports
a fully machine-readable output setting, allowing you to use Packer in automated
environments.
Because the machine-readable output format was made with Unix tools in mind, it
is `awk`/`sed`/`grep`/etc. friendly and provides a familiar interface without
@ -58,15 +58,15 @@ The format will be covered in more detail later. But as you can see, the output
immediately becomes machine-friendly. Try some other commands with the
`-machine-readable` flag to see!
~> The `-machine-readable` flag is designed for automated environments and is
mutually-exclusive with the `-debug` flag, which is designed for interactive
environments.
### Format for Machine-Readable Output
The machine-readable format is a line-oriented, comma-delimited text format.
This makes it more convenient to parse using standard Unix tools such as `awk`
or `grep` in addition to full programming languages like Ruby or Python.
The format is:
@ -78,18 +78,17 @@ Each component is explained below:
- `timestamp` is a Unix timestamp in UTC of when the message was printed.
- `target` When you call `packer build` this can be either empty or
individual build names, e.g. `amazon-ebs`. It is normally empty when builds
are in progress, and the build name when artifacts of particular builds are
being referred to.
- `type` is the type of machine-readable message being outputted. The two
most common `type`s are `ui` and `artifact`.
- `data` is zero or more comma-separated values associated with the prior type.
The exact amount and meaning of this data is type-dependent, so you must read
the documentation associated with the type to understand fully.
Within the format, if data contains a comma, it is replaced with
`%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'`
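As a sketch of how this output might be consumed, a machine-readable line can
be parsed by splitting on commas and restoring the `%!(PACKER_COMMA)`
placeholder. The function name below is illustrative, not part of Packer:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLine splits one machine-readable output line into its timestamp,
// target, and type components plus any trailing data values, restoring
// commas that Packer escaped as %!(PACKER_COMMA).
func parseLine(line string) (timestamp, target, msgType string, data []string) {
	fields := strings.Split(line, ",")
	for i, f := range fields {
		fields[i] = strings.ReplaceAll(f, "%!(PACKER_COMMA)", ",")
	}
	return fields[0], fields[1], fields[2], fields[3:]
}

func main() {
	_, target, msgType, data := parseLine("1539967803,amazon-ebs,artifact,0,id,eu-west-1:ami-04d23aca8bdd36e30")
	fmt.Println(target, msgType, data)
}
```

Because real commas in the data are always escaped, the naive split on `,` is
safe and the unescaping step recovers the original values.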
@ -105,66 +104,65 @@ Here's an incomplete list of types you may see in the machine-readable output:
You'll see these data types when you run `packer build`:
- `ui`: this means that the information being provided is a human-readable string
that would be sent to stdout even if we aren't in machine-readable mode. There
are three "data" subtypes associated with this type:
- `say`: in a non-machine-readable format, this would be bolded. Normally it is
used for announcements about beginning new steps in the build process.
- `message`: the most commonly used message type, used for basic updates during
the build process.
- `error`: reserved for errors
- `artifact-count`: This data type tells you how many artifacts a particular
build produced.
- `artifact`: This data type tells you information about what Packer created
during its build. An example of output follows the pattern
`timestamp, buildname, artifact, artifact_number, key, value` where `key` and
`value` contain information about the artifact.
For example:
```
1539967803,,ui,say,\n==> Builds finished. The artifacts of successful builds are:
1539967803,amazon-ebs,artifact-count,2
1539967803,amazon-ebs,artifact,0,builder-id,mitchellh.amazonebs
1539967803,amazon-ebs,artifact,0,id,eu-west-1:ami-04d23aca8bdd36e30
1539967803,amazon-ebs,artifact,0,string,AMIs were created:\neu-west-1: ami-04d23aca8bdd36e30\n
1539967803,amazon-ebs,artifact,0,files-count,0
1539967803,amazon-ebs,artifact,0,end
1539967803,,ui,say,--> amazon-ebs: AMIs were created:\neu-west-1: ami-04d23aca8bdd36e30\n
1539967803,amazon-ebs,artifact,1,builder-id,
1539967803,amazon-ebs,artifact,1,id,
1539967803,amazon-ebs,artifact,1,string,
1539967803,amazon-ebs,artifact,1,files-count,0
2018/10/19 09:50:03 waiting for all plugin processes to complete...
1539967803,amazon-ebs,artifact,1,end
```
You'll see these data types when you run `packer version`:
- `version`: what version of Packer is running
- `version-prerelease`: Data will contain `dev` if version is prerelease, and
otherwise will be blank.
- `version-commit`: The git hash for the commit that the branch of Packer is
currently on; most useful for Packer developers.
## Autocompletion
The `packer` command features opt-in subcommand autocompletion that you can
enable for your shell with `packer -autocomplete-install`. After doing so,
you can invoke a new shell and use the feature.
For example, assume a tab is typed at the end of each prompt line:
```
$ packer p
plugin build
$ packer build -
-color -debug -except -force -machine-readable -on-error -only -parallel -timestamp -var -var-file
```
@ -1,10 +1,10 @@
---
description: |
The `packer inspect` command takes a template and outputs the various
components a template defines. This can help you quickly learn about a
template without having to dive into the JSON itself. The command will tell
you things like what variables a template accepts, the builders it defines,
the provisioners it defines and the order they'll run, and more.
layout: docs
page_title: 'packer inspect - Commands'
sidebar_current: 'docs-commands-inspect'
@ -12,19 +12,19 @@ sidebar_current: 'docs-commands-inspect'
# `inspect` Command
The `packer inspect` command takes a template and outputs the various components
a template defines. This can help you quickly learn about a template without
having to dive into the JSON itself. The command will tell you things like what
variables a template accepts, the builders it defines, the provisioners it
defines and the order they'll run, and more.
This command is extra useful when used with
[machine-readable output](/docs/commands/index.html) enabled. The
command outputs the components in a way that is parseable by machines.
The command doesn't validate the actual configuration of the various components
(that is what the `validate` command is for), but it will validate the syntax of
your template by necessity.
## Usage Example
@ -12,10 +12,10 @@ sidebar_current: 'docs-commands-validate'
# `validate` Command
The `packer validate` Packer command is used to validate the syntax and
configuration of a [template](/docs/templates/index.html). The command
will return a zero exit status on success, and a non-zero exit status on
failure. Additionally, if a template doesn't validate, any error messages will
be outputted.
Example usage:
@ -30,12 +30,13 @@ Errors validating build 'vmware'. 1 error(s) occurred:
## Options
- `-syntax-only` - Only the syntax of the template is checked. The configuration
is not validated.
- `-except=foo,bar,baz` - Builds all the builds except those with the given
comma-separated names. Build names by default are the names of their builders,
unless a specific `name` attribute is specified within the configuration.
- `-only=foo,bar,baz` - Only build the builds with the given comma-separated
names. Build names by default are the names of their builders, unless a
@ -9,9 +9,9 @@ sidebar_current: 'docs-extending-custom-builders'
# Custom Builders
Packer Builders are the components of Packer responsible for creating a machine,
bringing it to a point where it can be provisioned, and then turning that
provisioned machine into some sort of machine image. Several builders are
officially distributed with Packer itself, such as the AMI builder, the VMware
builder, etc. However, it is possible to write custom builders using the Packer
plugin interface, and this page documents how to do that.
@ -46,8 +46,8 @@ method is responsible for translating this configuration into an internal
structure, validating it, and returning any errors.
For multiple parameters, they should be merged together into the final
configuration, with later parameters overwriting any previous configuration. The
exact semantics of the merge are left to the builder author.
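As a minimal sketch of the merge described above — later parameters
overwriting earlier ones — one possible (illustrative, not Packer's actual)
implementation is:

```go
package main

import "fmt"

// mergeConfigs folds several raw configuration parameters into one map;
// when the same key appears more than once, the later value wins.
func mergeConfigs(raws ...interface{}) map[string]interface{} {
	merged := map[string]interface{}{}
	for _, raw := range raws {
		if m, ok := raw.(map[string]interface{}); ok {
			for k, v := range m {
				merged[k] = v // later parameters overwrite earlier ones
			}
		}
	}
	return merged
}

func main() {
	cfg := mergeConfigs(
		map[string]interface{}{"iso_url": "a.iso", "ssh_port": 22},
		map[string]interface{}{"iso_url": "b.iso"},
	)
	fmt.Println(cfg["iso_url"], cfg["ssh_port"])
}
```

The merged map would then be decoded into the builder's config struct; the
exact semantics (deep vs. shallow merge, list handling) remain the author's
choice, as noted above.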
For decoding the `interface{}` into a meaningful structure, the
[mapstructure](https://github.com/mitchellh/mapstructure) library is
@ -55,25 +55,25 @@ recommended. Mapstructure will take an `interface{}` and decode it into an
arbitrarily complex struct. If there are any errors, it generates very human
friendly errors that can be returned directly from the prepare method.
While it is not actively enforced, **no side effects** should occur from running
the `Prepare` method. Specifically, don't create files, don't launch virtual
machines, etc. Prepare's purpose is solely to configure the builder and validate
the configuration.
In addition to normal configuration, Packer will inject a
`map[string]interface{}` with a key of `packer.DebugConfigKey` set to boolean
`true` if debug mode is enabled for the build. If this is set to true, then the
builder should enable a debug mode which assists builder developers and advanced
users to introspect what is going on during a build. During debug builds,
parallelism is strictly disabled, so it is safe to request input from stdin and
so on.
### The "Run" Method
`Run` is where all the interesting stuff happens. Run is executed, often in
parallel for multiple builders, to actually build the machine, provision it, and
create the resulting machine image, which is returned as an implementation of
the `packer.Artifact` interface.
The `Run` method takes three parameters. These are all very useful. The
`packer.Ui` object is used to send output to the console. `packer.Hook` is used
@ -117,8 +117,8 @@ follow the practice of making the ID contain my GitHub username and then the
platform it is building for. For example, the builder ID of the VMware builder
is "hashicorp.vmware" or something similar.
Post-processors use the builder ID value in order to make some assumptions about
the artifact results, so it is important it never changes.
Other than the builder ID, the rest should be self-explanatory by reading the
[packer.Artifact interface
@ -147,22 +147,22 @@ they aren't documented here other than to tell you how to hook in provisioners.
## Caching Files
It is common for some builders to deal with very large files, or files that take
a long time to generate. For example, the VMware builder has the capability to
download the operating system ISO from the internet. This is a time-consuming
process, so it would be convenient to cache the file. This sort of caching is a
core part of Packer that is exposed to builders.
The cache interface is `packer.Cache`. It behaves much like a Go
[RWMutex](https://golang.org/pkg/sync/#RWMutex). The builder requests a "lock" on
certain cache keys, and is given exclusive access to that key for the duration
of the lock. This locking mechanism allows multiple builders to share cache data
even though they're running in parallel.
For example, both the VMware and VirtualBox builders support downloading an
operating system ISO from the internet. Most of the time, this ISO is identical.
The locking mechanisms of the cache allow one of the builders to download it
only once, but allow both builders to share the downloaded file.
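A rough sketch of that download-once behaviour, using a plain mutex-guarded
map as a stand-in for `packer.Cache` (all names below are illustrative, not
the real Packer API):

```go
package main

import (
	"fmt"
	"sync"
)

// isoCache lets the first builder that asks for a key perform the
// download while every later builder reuses the cached result.
type isoCache struct {
	mu        sync.Mutex
	files     map[string]string // cache key -> local file path
	downloads int               // how many real downloads happened
}

func (c *isoCache) fetch(key string, download func() string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if path, ok := c.files[key]; ok {
		return path // another builder already fetched this ISO
	}
	c.downloads++
	path := download()
	c.files[key] = path
	return path
}

func main() {
	cache := &isoCache{files: map[string]string{}}
	dl := func() string { return "/tmp/ubuntu.iso" }
	// Both a "vmware" and a "virtualbox" build ask for the same ISO,
	// but only one download actually runs.
	a := cache.fetch("ubuntu-18.04", dl)
	b := cache.fetch("ubuntu-18.04", dl)
	fmt.Println(a == b, cache.downloads)
}
```

The real `packer.Cache` uses per-key read/write locks rather than one global
mutex, but the sharing property is the same: one download, many consumers.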
The [documentation for
packer.Cache](https://github.com/hashicorp/packer/blob/master/packer/cache.go)
@ -1,7 +1,7 @@
---
description: |
Packer Post-processors are the components of Packer that transform one
artifact into another, for example by compressing files, or uploading them.
layout: docs
page_title: 'Custom Post-Processors - Extending'
sidebar_current: 'docs-extending-custom-post-processors'
@ -44,8 +44,8 @@ type PostProcessor interface {
### The "Configure" Method
The `Configure` method for each post-processor is called early in the build
process to configure the post-processor. The configuration is passed in as a raw
`interface{}`. The configure method is responsible for translating this
configuration into an internal structure, validating it, and returning any
errors.
@ -55,27 +55,28 @@ recommended. Mapstructure will take an `interface{}` and decode it into an
arbitrarily complex struct. If there are any errors, it generates very
human-friendly errors that can be returned directly from the configure method.
While it is not actively enforced, **no side effects** should occur from running
the `Configure` method. Specifically, don't create files, don't create network
connections, etc. Configure's purpose is solely to set up internal state and
validate the configuration as much as possible.
`Configure` being run is not an indication that `PostProcess` will ever run. For
example, `packer validate` will run `Configure` to verify the configuration
validates, but will never actually run the build.
### The "PostProcess" Method
The `PostProcess` method is where the real work goes. PostProcess is responsible
for taking one `packer.Artifact` implementation, and transforming it into
another.
When we say "transform," we don't mean actually modifying the existing
`packer.Artifact` value itself. We mean taking the contents of the artifact and
creating a new artifact from that. For example, if we were creating a "compress"
post-processor that is responsible for compressing files, the transformation
would be taking the `Files()` from the original artifact, compressing them, and
creating a new artifact with a single file: the compressed archive.
The result signature of this method is `(Artifact, bool, error)`. Each return
value is explained below:
@ -86,5 +87,5 @@ value is explained below:
generally want intermediary artifacts. However, some post-processors depend
on the previous artifact existing. If this is `true`, it forces packer to
keep the artifact around.
- `error` - Non-nil if there was an error in any way. If this is the case, the
other two return values are ignored.
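The shape of such a transformation can be sketched with simplified stand-ins
for the Packer types (the `artifact` struct below is an assumption for
illustration, not the real `packer.Artifact` interface):

```go
package main

import "fmt"

// artifact is a simplified stand-in for the packer.Artifact interface.
type artifact struct {
	files []string
}

// postProcess mimics a "compress" post-processor: it leaves the input
// artifact untouched and returns a new one whose single file is the
// archive of the originals. The (artifact, bool, error) result mirrors
// the (Artifact, bool, error) signature described above.
func postProcess(in artifact) (artifact, bool, error) {
	fmt.Printf("compressing %d file(s) into one archive\n", len(in.files))
	out := artifact{files: []string{"archive.tar.gz"}}
	keep := false // intermediary artifacts are usually not kept
	return out, keep, nil
}

func main() {
	in := artifact{files: []string{"disk.vmdk", "machine.vmx"}}
	out, keep, err := postProcess(in)
	fmt.Println(out.files, keep, err)
}
```

Returning `keep == true` instead would signal Packer to retain the input
artifact for post-processors that depend on it.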
@ -1,8 +1,8 @@
---
description: |
Packer Provisioners are the components of Packer that install and configure
software into a running machine prior to turning that machine into an image.
An example of a provisioner is the shell provisioner, which runs shell scripts
within the machines.
layout: docs
page_title: 'Custom Provisioners - Extending'
@ -14,8 +14,8 @@ sidebar_current: 'docs-extending-custom-provisioners'
Packer Provisioners are the components of Packer that install and configure
software into a running machine prior to turning that machine into an image. An
example of a provisioner is the [shell
provisioner](/docs/provisioners/shell.html), which runs shell scripts within the
machines.
Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extending/plugins.html).
@ -49,8 +49,8 @@ method is responsible for translating this configuration into an internal
structure, validating it, and returning any errors.
For multiple parameters, they should be merged together into the final
configuration, with later parameters overwriting any previous configuration. The
exact semantics of the merge are left to the builder author.
For decoding the `interface{}` into a meaningful structure, the
[mapstructure](https://github.com/mitchellh/mapstructure) library is
@ -58,10 +58,10 @@ recommended. Mapstructure will take an `interface{}` and decode it into an
arbitrarily complex struct. If there are any errors, it generates very human
friendly errors that can be returned directly from the prepare method.
While it is not actively enforced, **no side effects** should occur from running
the `Prepare` method. Specifically, don't create files, don't launch virtual
machines, etc. Prepare's purpose is solely to configure the builder and validate
the configuration.
The `Prepare` method is called very early in the build process so that errors
may be displayed to the user before anything actually happens.
@ -12,5 +12,5 @@ sidebar_current: 'docs-extending'
Packer is designed to be extensible. Because the surface area for workloads is
infinite, Packer supports plugins for builders, provisioners, and
post-processors. To learn more about the different customizations, please choose
a link from the sidebar.
@ -14,12 +14,12 @@ Packer Plugins allow new functionality to be added to Packer without modifying
the core source code. Packer plugins are able to add new builders,
provisioners, hooks, and more. In fact, much of Packer itself is implemented by
writing plugins that are simply distributed with Packer. For example, all the
builders, provisioners, and more that ship with Packer are implemented
as Plugins that are simply hardcoded to load with Packer.
This section will cover how to install and use plugins. If you're interested in
developing plugins, the documentation for that is available below, in the [developing
plugins](#developing-plugins) section.
developing plugins, the documentation for that is available below, in the
[developing plugins](#developing-plugins) section.
Because Packer is so young, there is no official listing of available Packer
plugins. Plugins are best found via Google. Typically, searching "packer plugin
@@ -28,8 +28,8 @@ official plugin directory is planned.
## How Plugins Work
Packer plugins are completely separate, standalone applications that the core of
Packer starts and communicates with.
Packer plugins are completely separate, standalone applications that the core
of Packer starts and communicates with.
These plugin applications aren't meant to be run manually. Instead, Packer core
executes them as a sub-process, run as a sub-command (`packer plugin`) and
@@ -43,28 +43,30 @@ applications running.
The easiest way to install a plugin is to name it correctly, then place it in
the proper directory. To name a plugin correctly, make sure the binary is named
`packer-TYPE-NAME`. For example, `packer-builder-amazon-ebs` for a "builder"
type plugin named "amazon-ebs". Valid types for plugins are down this page more.
type plugin named "amazon-ebs". Valid plugin types are listed further down
this page.
Once the plugin is named properly, Packer automatically discovers plugins in the
following directories in the given order. If a conflicting plugin is found
Once the plugin is named properly, Packer automatically discovers plugins in
the following directories in the given order. If a conflicting plugin is found
later, it will take precedence over one found earlier.
1. The directory where `packer` is, or the executable directory.
2. `~/.packer.d/plugins` on Unix systems or `%APPDATA%/packer.d/plugins`
on Windows.
2. `~/.packer.d/plugins` on Unix systems or `%APPDATA%/packer.d/plugins` on
Windows.
3. The current working directory.
The valid types for plugins are:
- `builder` - Plugins responsible for building images for a specific platform.
- `builder` - Plugins responsible for building images for a specific
platform.
- `post-processor` - A post-processor responsible for taking an artifact from
a builder and turning it into something else.
- `provisioner` - A provisioner to install software on images created by
a builder.
- `provisioner` - A provisioner to install software on images created by a
builder.
## Developing Plugins
@@ -86,8 +88,8 @@ recommend getting a bit more comfortable before you dive into writing plugins.
Packer has a fairly unique plugin architecture. Instead of loading plugins
directly into a running application, Packer runs each plugin as a *separate
application*. Inter-process communication and RPC is then used to communicate
between the many running Packer processes. Packer core itself is responsible for
orchestrating the processes and handles cleanup.
between the many running Packer processes. Packer core itself is responsible
for orchestrating the processes and handles cleanup.
The beauty of this is that your plugin can have any dependencies it wants.
Dependencies don't need to line up with what Packer core or any other plugin
@@ -103,17 +105,17 @@ process. Pretty cool.
### Plugin Development Basics
Developing a plugin allows you to create additional functionality for Packer.
All the various kinds of plugins have a corresponding interface. The plugin needs
to implement this interface and expose it using the Packer plugin package
All the various kinds of plugins have a corresponding interface. The plugin
needs to implement this interface and expose it using the Packer plugin package
(covered here shortly), and that's it!
There are two packages that really matter that every plugin must use. Other than
the following two packages, you're encouraged to use whatever packages you want.
Because plugins are their own processes, there is no danger of colliding
There are two packages that really matter that every plugin must use. Other
than the following two packages, you're encouraged to use whatever packages you
want. Because plugins are their own processes, there is no danger of colliding
dependencies.
- `github.com/hashicorp/packer` - Contains all the interfaces that you have to
implement for any given plugin.
- `github.com/hashicorp/packer` - Contains all the interfaces that you have
to implement for any given plugin.
- `github.com/hashicorp/packer/packer/plugin` - Contains the code to serve
the plugin. This handles all the inter-process communication stuff.
@@ -123,8 +125,9 @@ There are two steps involved in creating a plugin:
1. Implement the desired interface. For example, if you're building a builder
plugin, implement the `packer.Builder` interface.
2. Serve the interface by calling the appropriate plugin serving method in your
main method. In the case of a builder, this is `plugin.RegisterBuilder`.
2. Serve the interface by calling the appropriate plugin serving method in
your main method. In the case of a builder, this is
`plugin.RegisterBuilder`.
A basic example is shown below. In this example, assume the `Builder` struct
implements the `packer.Builder` interface:
@@ -160,12 +163,12 @@ plugins will continue to work with the version of Packer you lock to.
### Logging and Debugging
Plugins can use the standard Go `log` package to log. Anything logged using this
will be available in the Packer log files automatically. The Packer log is
Plugins can use the standard Go `log` package to log. Anything logged using
this will be available in the Packer log files automatically. The Packer log is
visible on stderr when the `PACKER_LOG` environment variable is set.
Packer will prefix any logs from plugins with the path to that plugin to make it
identifiable where the logs come from. Some example logs are shown below:
Packer will prefix any logs from plugins with the path to that plugin so you
can identify where the logs come from. Some example logs are shown below:
``` text
2013/06/10 21:44:43 Loading builder: custom
@@ -176,8 +179,8 @@ identifiable where the logs come from. Some example logs are shown below:
As you can see, the log messages from the custom builder plugin are prefixed
with "packer-builder-custom". Log output is *extremely* helpful in debugging
issues and you're encouraged to be as verbose as you need to be in order for the
logs to be helpful.
issues and you're encouraged to be as verbose as you need to be in order for
the logs to be helpful.
### Plugin Development Tips
@@ -9,5 +9,5 @@ sidebar_current: 'docs-install'
# Install Packer
For detailed instructions on how to install Packer, see [this page](/intro/getting-started/install.html) in our
getting-started guide.
For detailed instructions on how to install Packer, see [this
page](/intro/getting-started/install.html) in our getting-started guide.
@@ -1,8 +1,8 @@
---
description: |
There are a few configuration settings that affect Packer globally by
configuring the core of Packer. These settings all have reasonable defaults,
so you generally don't have to worry about it until you want to tweak a
configuring the core of Packer. These settings all have reasonable defaults, so
you generally don't have to worry about it until you want to tweak a
configuration.
layout: docs
page_title: 'Core Configuration - Other'
@@ -32,13 +32,13 @@ The format of the configuration file is basic JSON.
Below is the list of all available configuration parameters for the core
configuration file. None of these are required, since all have sane defaults.
- `plugin_min_port` and `plugin_max_port` (number) - These are the minimum and
maximum ports that Packer uses for communication with plugins, since plugin
communication happens over TCP connections on your local host. By default
these are 10,000 and 25,000, respectively. Be sure to set a fairly wide range
here, since Packer can easily use over 25 ports on a single run.
- `plugin_min_port` and `plugin_max_port` (number) - These are the minimum
and maximum ports that Packer uses for communication with plugins, since
plugin communication happens over TCP connections on your local host. By
default these are 10,000 and 25,000, respectively. Be sure to set a fairly
wide range here, since Packer can easily use over 25 ports on a single run.
- `builders`, `commands`, `post-processors`, and `provisioners` are objects that
are used to install plugins. The details of how exactly these are set is
covered in more detail in the [installing plugins documentation
- `builders`, `commands`, `post-processors`, and `provisioners` are objects
that are used to install plugins. The details of how exactly these are set
is covered in more detail in the [installing plugins documentation
page](/docs/extending/plugins.html).
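
For illustration, a core configuration file combining these settings might
look like the following sketch (the plugin name and path below are
hypothetical placeholders, not shipped plugins):

``` json
{
  "plugin_min_port": 10000,
  "plugin_max_port": 25000,
  "provisioners": {
    "my-provisioner": "/usr/local/bin/packer-provisioner-my-provisioner"
  }
}
```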
@@ -29,25 +29,25 @@ for debugging. The key will only be emitted for cloud-based builders. The
ephemeral key will be deleted at the end of the packer run during cleanup.
For a local builder, the SSH session initiated will be visible in the detail
provided when `PACKER_LOG=1` environment variable is set prior to a build,
and you can connect to the local machine using the userid and password defined
in the kickstart or preseed associated with initializing the local VM.
provided when the `PACKER_LOG=1` environment variable is set prior to a build,
and you can connect to the local machine using the userid and password defined
in the kickstart or preseed associated with initializing the local VM.
It should be noted that when the `-on-error=retry` option is used, the retry
of the step in question has limitations:
* the template packer is building is **not** reloaded from file.
* the resources specified from builders **are** reloaded from file.
- the template packer is building is **not** reloaded from file.
- the resources specified from builders **are** reloaded from file.
Check the specifics of your builder to confirm this behavior.
### Windows
As of Packer 0.8.1 the default WinRM communicator will emit the password for a
Remote Desktop Connection into your instance. This happens following the several
minute pause as the instance is booted. Note a .pem key is still created for
securely transmitting the password. Packer automatically decrypts the password
for you in debug mode.
Remote Desktop Connection into your instance. This happens following the
several minute pause as the instance is booted. Note a .pem key is still
created for securely transmitting the password. Packer automatically decrypts
the password for you in debug mode.
## Debugging Packer
@@ -95,9 +95,9 @@ provisioner step:
amazon-ebs: No candidate version found for build-essential
This, obviously, can cause problems where a build is unable to finish
successfully as the proper packages cannot be provisioned correctly. The problem
arises when cloud-init has not finished fully running on the source AMI by the
time that packer starts any provisioning steps.
successfully as the proper packages cannot be provisioned correctly. The
problem arises when cloud-init has not finished fully running on the source AMI
by the time that packer starts any provisioning steps.
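
One common workaround is a shell provisioner that blocks until cloud-init
finishes; the sketch below assumes a standard cloud-init install that writes
the `boot-finished` marker file:

``` json
{
  "type": "shell",
  "inline": [
    "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"
  ]
}
```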
Adding the following provisioner to the packer template allows the cloud-init
process to fully finish before packer starts provisioning the source
@@ -124,7 +124,8 @@ error initializing provisioner 'powershell': fork/exec /files/go/bin/packer:
too many open files
```
On Unix systems, you can check what your file descriptor limit is with `ulimit -Sn`. You should check with your OS vendor on how to raise this limit.
On Unix systems, you can check what your file descriptor limit is with
`ulimit -Sn`. You should check with your OS vendor on how to raise this limit.
## Issues when using long temp directory
@@ -12,19 +12,20 @@ each can be found below:
- `PACKER_CACHE_DIR` - The location of the packer cache.
- `PACKER_CONFIG` - The location of the core configuration file. The format of
the configuration file is basic JSON. See the [core configuration
- `PACKER_CONFIG` - The location of the core configuration file. The format
of the configuration file is basic JSON. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_LOG` - Setting this to any value other than "" (empty string) or "0" will enable the logger. See the
[debugging page](/docs/other/debugging.html).
- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must be
set for any logging to occur. See the [debugging
- `PACKER_LOG` - Setting this to any value other than "" (empty string) or
"0" will enable the logger. See the [debugging
page](/docs/other/debugging.html).
- `PACKER_NO_COLOR` - Setting this to any value will disable color in
the terminal.
- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must
be set for any logging to occur. See the [debugging
page](/docs/other/debugging.html).
- `PACKER_NO_COLOR` - Setting this to any value will disable color in the
terminal.
- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for
communication with plugins, since plugin communication happens over TCP
@@ -41,8 +42,9 @@ each can be found below:
new versions of Packer. If you want to disable this for security or privacy
reasons, you can set this environment variable to `1`.
- `TMPDIR` (Unix) / `TMP` (Windows) - The location of the directory used for temporary files (defaults
to `/tmp` on Linux/Unix and `%USERPROFILE%\AppData\Local\Temp` on Windows
Vista and above). It might be necessary to customize it when working
with large files since `/tmp` is a memory-backed filesystem in some Linux
distributions in which case `/var/tmp` might be preferred.
- `TMPDIR` (Unix) / `TMP` (Windows) - The location of the directory used for
temporary files (defaults to `/tmp` on Linux/Unix and
`%USERPROFILE%\AppData\Local\Temp` on Windows Vista and above). It might be
necessary to customize it when working with large files since `/tmp` is a
memory-backed filesystem in some Linux distributions in which case
`/var/tmp` might be preferred.
@@ -16,44 +16,44 @@ various builders and imports it to an Alicloud ECS Image.
## How Does it Work?
The import process operates by making a temporary copy of the RAW or VHD to an OSS
bucket, and calling an import task in ECS on the RAW or VHD file. Once
The import process operates by making a temporary copy of the RAW or VHD to an
OSS bucket, and calling an import task in ECS on the RAW or VHD file. Once
completed, an Alicloud ECS Image is returned. The temporary RAW or VHD copy in
OSS can be discarded after the import is complete.
## Configuration
There are some configuration options available for the post-processor. There are
two categories: required and optional parameters.
There are some configuration options available for the post-processor. There
are two categories: required and optional parameters.
### Required:
- `access_key` (string) - This is the Alicloud access key. It must be provided,
but it can also be sourced from the `ALICLOUD_ACCESS_KEY` environment
variable.
- `access_key` (string) - This is the Alicloud access key. It must be
provided, but it can also be sourced from the `ALICLOUD_ACCESS_KEY`
environment variable.
- `secret_key` (string) - This is the Alicloud secret key. It must be provided,
but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment
variable.
- `secret_key` (string) - This is the Alicloud secret key. It must be
provided, but it can also be sourced from the `ALICLOUD_SECRET_KEY`
environment variable.
- `region` (string) - This is the Alicloud region. It must be provided, but it
can also be sourced from the `ALICLOUD_REGION` environment variables.
- `region` (string) - This is the Alicloud region. It must be provided, but
it can also be sourced from the `ALICLOUD_REGION` environment variable.
- `image_name` (string) - The name of the user-defined image, \[2, 128\] English
or Chinese characters. It must begin with an uppercase/lowercase letter or
a Chinese character, and may contain numbers, `_` or `-`. It cannot begin
with <http://> or <https://>.
- `image_name` (string) - The name of the user-defined image, \[2, 128\]
English or Chinese characters. It must begin with an uppercase/lowercase
letter or a Chinese character, and may contain numbers, `_` or `-`. It
cannot begin with <http://> or <https://>.
- `oss_bucket_name` (string) - The name of the OSS bucket where the RAW or VHD
file will be copied to for import. If the Bucket isn't exist, post-process
will create it for you.
- `oss_bucket_name` (string) - The name of the OSS bucket where the RAW or
VHD file will be copied to for import. If the bucket doesn't exist, the
post-processor will create it for you.
- `image_os_type` (string) - Type of the OS: `linux` or `windows`.
- `image_platform` (string) - Platform of the OS, such as `CentOS`.
- `image_architecture` (string) - Platform type of the image system:i386
| x86\_64
- `image_architecture` (string) - Platform type of the image system: i386 \|
x86\_64
- `format` (string) - The format of the image for import; currently Alicloud
only supports RAW and VHD.
@@ -63,21 +63,22 @@ two categories: required and optional parameters.
- `oss_key_name` (string) - The name of the object key in `oss_bucket_name`
where the RAW or VHD file will be copied to for import.
- `skip_clean` (boolean) - Whether we should skip removing the RAW or VHD file
uploaded to OSS after the import process has completed. `true` means that we
should leave it in the OSS bucket, `false` means to clean it out. Defaults to
`false`.
- `skip_clean` (boolean) - Whether we should skip removing the RAW or VHD
file uploaded to OSS after the import process has completed. `true` means
that we should leave it in the OSS bucket, `false` means to clean it out.
Defaults to `false`.
- `image_description` (string) - The description of the image, with a length
limit of 0 to 256 characters. Leaving it blank means null, which is the
default value. It cannot begin with <http://> or <https://>.
- `image_force_delete` (boolean) - If this value is true, when the target image
name is duplicated with an existing image, it will delete the existing image
and then create the target image, otherwise, the creation will fail. The
default value is false.
- `image_force_delete` (boolean) - If this value is true, when the target
image name is duplicated with an existing image, it will delete the
existing image and then create the target image, otherwise, the creation
will fail. The default value is false.
- `image_system_size` (number) - Size of the system disk, in GB, values range:
- `image_system_size` (number) - Size of the system disk, in GB, values
range:
- cloud - 5 ~ 2000
- cloud\_efficiency - 20 ~ 2048
- cloud\_ssd - 20 ~ 2048
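
Putting the required options together, a minimal `alicloud-import`
post-processor configuration might look like the sketch below (the
credentials, bucket, and image names are placeholders):

``` json
{
  "type": "alicloud-import",
  "access_key": "{{user `access_key`}}",
  "secret_key": "{{user `secret_key`}}",
  "region": "cn-beijing",
  "image_name": "packer_import_test",
  "oss_bucket_name": "packer-import-bucket",
  "image_os_type": "linux",
  "image_platform": "CentOS",
  "image_architecture": "x86_64",
  "format": "RAW"
}
```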
@@ -11,32 +11,52 @@ sidebar_current: 'docs-post-processors-amazon-import'
Type: `amazon-import`
The Packer Amazon Import post-processor takes an OVA artifact from various builders and imports it to an AMI available to Amazon Web Services EC2.
The Packer Amazon Import post-processor takes an OVA artifact from various
builders and imports it to an AMI available to Amazon Web Services EC2.
~&gt; This post-processor is for advanced users. It depends on specific IAM roles inside AWS and is best used with images that operate with the EC2 configuration model (eg, cloud-init for Linux systems). Please ensure you read the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) before using this post-processor.
~> This post-processor is for advanced users. It depends on specific IAM
roles inside AWS and is best used with images that operate with the EC2
configuration model (e.g., cloud-init for Linux systems). Please ensure you read
the [prerequisites for
import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html)
before using this post-processor.
## How Does it Work?
The import process operates making a temporary copy of the OVA to an S3 bucket, and calling an import task in EC2 on the OVA file. Once completed, an AMI is returned containing the converted virtual machine. The temporary OVA copy in S3 can be discarded after the import is complete.
The import process operates by making a temporary copy of the OVA to an S3
bucket, and calling an import task in EC2 on the OVA file. Once completed, an
AMI is returned containing the converted virtual machine. The temporary OVA
copy in S3 can be discarded after the import is complete.
The import process itself run by AWS includes modifications to the image uploaded, to allow it to boot and operate in the AWS EC2 environment. However, not all modifications required to make the machine run well in EC2 are performed. Take care around console output from the machine, as debugging can be very difficult without it. You may also want to include tools suitable for instances in EC2 such as `cloud-init` for Linux.
The import process itself, run by AWS, includes modifications to the uploaded
image to allow it to boot and operate in the AWS EC2 environment. However, not
all modifications required to make the machine run well in EC2 are performed.
Take care around console output from the machine, as debugging can be very
difficult without it. You may also want to include tools suitable for
instances in EC2 such as `cloud-init` for Linux.
Further information about the import process can be found in AWS's [EC2 Import/Export Instance documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instances_of_your_vm.html).
Further information about the import process can be found in AWS's [EC2
Import/Export Instance
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instances_of_your_vm.html).
## Configuration
There are some configuration options available for the post-processor. They are
segmented below into two categories: required and optional parameters.
Within each category, the available configuration keys are alphabetized.
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
Required:
- `access_key` (string) - The access key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `region` (string) - The name of the region, such as `us-east-1` in which to upload the OVA file to S3 and create the AMI. A list of valid regions can be obtained with AWS CLI tools or by consulting the AWS website.
- `region` (string) - The name of the region, such as `us-east-1` in which to
upload the OVA file to S3 and create the AMI. A list of valid regions can
be obtained with AWS CLI tools or by consulting the AWS website.
- `s3_bucket_name` (string) - The name of the S3 bucket where the OVA file will be copied to for import. This bucket must exist when the post-processor is run.
- `s3_bucket_name` (string) - The name of the S3 bucket where the OVA file
will be copied to for import. This bucket must exist when the
post-processor is run.
- `secret_key` (string) - The secret key used to communicate with AWS. [Learn
how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
@@ -53,9 +73,9 @@ Optional:
accept any value other than "all".
- `ami_name` (string) - The name of the ami within the console. If not
specified, this will default to something like `ami-import-sfwerwf`.
Please note, specifying this option will result in a slightly longer
execution time.
specified, this will default to something like `ami-import-sfwerwf`. Please
note, specifying this option will result in a slightly longer execution
time.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the imported AMI. By default no additional users other than the user
@@ -65,29 +85,34 @@ Optional:
provider whose API is compatible with aws EC2. Specify another endpoint
like this `https://ec2.custom.endpoint.com`.
- `license_type` (string) - The license type to be used for the Amazon Machine
Image (AMI) after importing. Valid values: `AWS` or `BYOL` (default).
For more details regarding licensing, see
- `license_type` (string) - The license type to be used for the Amazon
Machine Image (AMI) after importing. Valid values: `AWS` or `BYOL`
(default). For more details regarding licensing, see
[Prerequisites](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html)
in the VM Import/Export User Guide.
- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the time.
- `mfa_code` (string) - The MFA
[TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm)
code. This should probably be a user variable since it changes all the
time.
- `profile` (string) - The profile to use in the shared credentials file for
AWS. See Amazon's documentation on [specifying
profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles)
for more details.
- `role_name` (string) - The name of the role to use when not using the default role, 'vmimport'
- `role_name` (string) - The name of the role to use when not using the
default role, 'vmimport'
- `s3_key_name` (string) - The name of the key in `s3_bucket_name` where the
OVA file will be copied to for import. If not specified, this will default
to "packer-import-{{timestamp}}.ova". This key (i.e., the uploaded OVA) will
be removed after import, unless `skip_clean` is `true`.
to "packer-import-{{timestamp}}.ova". This key (i.e., the uploaded OVA)
will be removed after import, unless `skip_clean` is `true`.
- `skip_clean` (boolean) - Whether we should skip removing the OVA file uploaded to S3 after the
import process has completed. "true" means that we should leave it in the S3 bucket, "false" means to clean it out. Defaults to `false`.
- `skip_clean` (boolean) - Whether we should skip removing the OVA file
uploaded to S3 after the import process has completed. "true" means that we
should leave it in the S3 bucket, "false" means to clean it out. Defaults
to `false`.
- `skip_region_validation` (boolean) - Set to true if you want to skip
validation of the region configuration option. Default `false`.
@@ -102,7 +127,9 @@ Optional:
## Basic Example
Here is a basic example. This assumes that the builder has produced an OVA artifact for us to work with, and IAM roles for import exist in the AWS account being imported into.
Here is a basic example. This assumes that the builder has produced an OVA
artifact for us to work with, and IAM roles for import exist in the AWS account
being imported into.
``` json
{
@@ -120,7 +147,8 @@ Here is a basic example. This assumes that the builder has produced an OVA artif
## VMWare Example
This is an example that uses `vmware-iso` builder and exports the `.ova` file using ovftool.
This is an example that uses `vmware-iso` builder and exports the `.ova` file
using ovftool.
``` json
"post-processors" : [
@@ -151,24 +179,31 @@ This is an example that uses `vmware-iso` builder and exports the `.ova` file us
```
## Troubleshooting Timeouts
The amazon-import feature can take a long time to upload and convert your OVAs
into AMIs; if you find that your build is failing because you have exceeded your
max retries or find yourself being rate limited, you can override the max
retries and the delay in between retries by setting the environment variables
`AWS_MAX_ATTEMPTS` and `AWS_POLL_DELAY_SECONDS` on the machine running the
Packer build. By default, the waiter that waits for your image to be imported
from s3 will retry for up to an hour: it retries up to 720 times with a 5
second delay in between retries.
This is dramatically higher than many of our other waiters, to account for how
long this process can take.
The amazon-import feature can take a long time to upload and convert your OVAs
into AMIs; if you find that your build is failing because you have exceeded
your max retries or find yourself being rate limited, you can override the max
retries and the delay in between retries by setting the environment variables
`AWS_MAX_ATTEMPTS` and `AWS_POLL_DELAY_SECONDS` on the machine running the
Packer build. By default, the waiter that waits for your image to be imported
from s3 will retry for up to an hour: it retries up to 720 times with a 5
second delay in between retries.
This is dramatically higher than many of our other waiters, to account for how
long this process can take.
-> **Note:** Packer can also read the access key and secret access key from
environment variables. See the configuration reference in the section above
for more information on which environment variables Packer will look for.
This will take the OVA generated by a builder and upload it to S3. In this case, an existing bucket called `importbucket` in the `us-east-1` region will be where the copy is placed. The key name of the copy will be a default name generated by packer.
This will take the OVA generated by a builder and upload it to S3. In this
case, an existing bucket called `importbucket` in the `us-east-1` region will
be where the copy is placed. The key name of the copy will be a default name
generated by packer.
Once uploaded, the import process will start, creating an AMI in the "us-east-1" region with a "Description" tag applied to both the AMI and the snapshots associated with it. Note: the import process does not allow you to name the AMI, the name is automatically generated by AWS.
Once uploaded, the import process will start, creating an AMI in the
"us-east-1" region with a "Description" tag applied to both the AMI and the
snapshots associated with it. Note: the import process does not allow you to
name the AMI, the name is automatically generated by AWS.
After tagging is completed, the OVA uploaded to S3 will be removed.
@@ -4,8 +4,8 @@ description: |
builder or post-processor. All downstream post-processors will see the new
artifacts you specify. The primary use-case is to build artifacts inside a
packer builder -- for example, spinning up an EC2 instance to build a docker
container -- and then extracting the docker container and throwing away the
EC2 instance.
container -- and then extracting the docker container and throwing away the EC2
instance.
layout: docs
page_title: 'Artifice - Post-Processors'
sidebar_current: 'docs-post-processors-artifice'
@@ -15,22 +15,23 @@ sidebar_current: 'docs-post-processors-artifice'
Type: `artifice`
The artifice post-processor overrides the artifact list from an upstream builder
or post-processor. All downstream post-processors will see the new artifacts you
specify. The primary use-case is to build artifacts inside a packer builder --
for example, spinning up an EC2 instance to build a docker container -- and then
extracting the docker container and throwing away the EC2 instance.
The artifice post-processor overrides the artifact list from an upstream
builder or post-processor. All downstream post-processors will see the new
artifacts you specify. The primary use-case is to build artifacts inside a
packer builder -- for example, spinning up an EC2 instance to build a docker
container -- and then extracting the docker container and throwing away the EC2
instance.
After overriding the artifact with artifice, you can use it with other
post-processors like
[compress](https://www.packer.io/docs/post-processors/compress.html),
[docker-push](https://www.packer.io/docs/post-processors/docker-push.html),
or a third-party post-processor.
[docker-push](https://www.packer.io/docs/post-processors/docker-push.html), or
a third-party post-processor.
Artifice allows you to use the familiar packer workflow to create a fresh,
stateless build environment for each build on the infrastructure of your
choosing. You can use this to build just about anything: buildpacks, containers,
jars, binaries, tarballs, msi installers, and more.
choosing. You can use this to build just about anything: buildpacks,
containers, jars, binaries, tarballs, msi installers, and more.
## Workflow
@ -41,8 +42,7 @@ Artifice helps you tie together a few other packer features:
- A file provisioner, which downloads the artifact from the VM
- The artifice post-processor, which identifies which files have been
downloaded from the VM
- Additional post-processors, which push the artifact to Docker
hub, etc.
- Additional post-processors, which push the artifact to Docker hub, etc.
You will want to perform as much work as possible inside the VM. Ideally the
only other post-processor you need after artifice is one that uploads your
@ -122,9 +122,10 @@ another builder.
**Notice that there are two sets of square brackets in the post-processor
section.** This creates a post-processor chain, where the output of the
preceding artifact is passed to subsequent post-processors. If you use only one
set of square braces the post-processors will run individually against the build
artifact (the vmx file in this case) and it will not have the desired result.
preceding artifact is passed to subsequent post-processors. If you use only
one set of square braces the post-processors will run individually against the
build artifact (the vmx file in this case) and it will not have the desired
result.
``` json
{

View File

@ -1,11 +1,11 @@
---
description: |
The checksum post-processor computes the specified checksum for the artifact
list from an upstream builder or post-processor. All downstream post-processors
will see the new artifacts. The primary use-case is to compute checksums for
artifacts so they can be verified later. This post-processor takes each
artifact, computes its checksum, and passes the original artifacts and
checksum files to the next post-processor.
from an upstream builder or post-processor. All downstream post-processors will
see the new artifacts. The primary use-case is to compute checksums for
artifacts so they can be verified later. This post-processor takes each
artifact, computes its checksum, and passes the original artifacts and checksum
files to the next post-processor.
layout: docs
page_title: 'Checksum - Post-Processors'
sidebar_current: 'docs-post-processors-checksum'
@ -24,8 +24,8 @@ After computing checksums for artifacts, you can use the new artifacts with other
post-processors like
[artifice](https://www.packer.io/docs/post-processors/artifice.html),
[compress](https://www.packer.io/docs/post-processors/compress.html),
[docker-push](https://www.packer.io/docs/post-processors/docker-push.html),
or a third-party post-processor.
[docker-push](https://www.packer.io/docs/post-processors/docker-push.html), or
a third-party post-processor.
## Basic example
@ -46,8 +46,8 @@ Optional parameters:
- `output` (string) - Specify filename to store checksums. This defaults to
`packer_{{.BuildName}}_{{.BuilderType}}_{{.ChecksumType}}.checksum`. For
example, if you had a builder named `database`, you might see the file
written as `packer_database_docker_md5.checksum`. The following variables are
available to use in the output template:
written as `packer_database_docker_md5.checksum`. The following variables
are available to use in the output template:
- `BuildName`: The name of the builder that produced the artifact.
- `BuilderType`: The type of builder used to produce the artifact.
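To illustrate the `output` template, here is a sketch of a checksum stanza that
writes per-build checksum files (the `checksum_types` option name and the output
path are assumptions for illustration):

``` json
{
  "post-processors": [
    {
      "type": "checksum",
      "checksum_types": ["sha256"],
      "output": "checksums/{{.BuildName}}_{{.ChecksumType}}.checksum"
    }
  ]
}
```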

View File

@ -23,10 +23,9 @@ filename: `packer_{{.BuildName}}_{{.BuilderType}}`. If you want to change this
you will need to specify the `output` option.
- `output` (string) - The path to save the compressed archive. The archive
format is inferred from the filename. E.g. `.tar.gz` will be a
gzipped tarball. `.zip` will be a zip file. If the extension can't be
detected packer defaults to `.tar.gz` behavior but will not change
the filename.
format is inferred from the filename. E.g. `.tar.gz` will be a gzipped
tarball. `.zip` will be a zip file. If the extension can't be detected
packer defaults to `.tar.gz` behavior but will not change the filename.
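For example, a compress stanza whose filename ends in `.zip` will produce a zip
archive rather than a gzipped tarball (the output path here is hypothetical):

``` json
{
  "post-processors": [
    {
      "type": "compress",
      "output": "archive/{{.BuildName}}-{{.BuilderType}}.zip"
    }
  ]
}
```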
You can use `{{.BuildName}}` and `{{.BuilderType}}` in your output path. If
you are executing multiple builders in parallel you should make sure

View File

@ -15,8 +15,8 @@ Type: `docker-import`
The Packer Docker import post-processor takes an artifact from the [docker
builder](/docs/builders/docker.html) and imports it with Docker locally. This
allows you to apply a repository and tag to the image and lets you use the other
Docker post-processors such as
allows you to apply a repository and tag to the image and lets you use the
other Docker post-processors such as
[docker-push](/docs/post-processors/docker-push.html) to push the image to a
registry.
@ -27,7 +27,8 @@ is optional.
- `repository` (string) - The repository of the imported image.
- `tag` (string) - The tag for the imported image. By default this is not set.
- `tag` (string) - The tag for the imported image. By default this is not
set.
## Example

View File

@ -19,26 +19,28 @@ pushes it to a Docker registry.
This post-processor has only optional configuration:
- `aws_access_key` (string) - The AWS access key used to communicate with AWS.
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_access_key` (string) - The AWS access key used to communicate with
AWS. [Learn how to set
this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_secret_key` (string) - The AWS secret key used to communicate with AWS.
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_secret_key` (string) - The AWS secret key used to communicate with
AWS. [Learn how to set
this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_token` (string) - The AWS access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
- `aws_token` (string) - The AWS access token to use. This is different from
the access key and secret key. If you're not sure what this is, then you
probably don't need it. This will also be read from the `AWS_SESSION_TOKEN`
environmental variable.
- `aws_profile` (string) - The AWS shared credentials profile used to communicate with AWS.
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `aws_profile` (string) - The AWS shared credentials profile used to
communicate with AWS. [Learn how to set
this.](/docs/builders/amazon.html#specifying-amazon-credentials)
- `ecr_login` (boolean) - Defaults to false. If true, the post-processor
    will log in in order to push the image to
[Amazon EC2 Container Registry (ECR)](https://aws.amazon.com/ecr/).
The post-processor only logs in for the duration of the push. If true
`login_server` is required and `login`, `login_username`, and
`login_password` will be ignored.
- `ecr_login` (boolean) - Defaults to false. If true, the post-processor will
    log in in order to push the image to [Amazon EC2 Container Registry
(ECR)](https://aws.amazon.com/ecr/). The post-processor only logs in for
the duration of the push. If true `login_server` is required and `login`,
`login_username`, and `login_password` will be ignored.
- `login` (boolean) - Defaults to false. If true, the post-processor will
    log in prior to pushing. To log in to ECR, see `ecr_login`.
@ -49,10 +51,10 @@ This post-processor has only optional configuration:
- `login_server` (string) - The server address to login to.
-> **Note:** When using *Docker Hub* or *Quay* registry servers, `login` must be
set to `true`, and `login_username` **and** `login_password`
must be set to your registry credentials. When using Docker Hub,
`login_server` can be omitted.
-> **Note:** When using *Docker Hub* or *Quay* registry servers, `login`
must be set to `true`, and `login_username` **and** `login_password` must be
set to your registry credentials. When using Docker Hub, `login_server` can
be omitted.
-> **Note:** If you log in using the credentials above, the post-processor
will automatically log you out afterwards (just the server specified).
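Putting the login options together, a sketch of a docker-push stanza for a
private registry (the server address and the user variable names are
hypothetical):

``` json
{
  "post-processors": [
    {
      "type": "docker-push",
      "login": true,
      "login_server": "registry.example.com",
      "login_username": "{{user `registry_user`}}",
      "login_password": "{{user `registry_pass`}}"
    }
  ]
}
```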

View File

@ -1,9 +1,9 @@
---
description: |
The Packer Docker Save post-processor takes an artifact from the docker
builder that was committed and saves it to a file. This is similar to
exporting the Docker image directly from the builder, except that it preserves
the hierarchy of images and metadata.
The Packer Docker Save post-processor takes an artifact from the docker builder
that was committed and saves it to a file. This is similar to exporting the
Docker image directly from the builder, except that it preserves the hierarchy
of images and metadata.
layout: docs
page_title: 'Docker Save - Post-Processors'
sidebar_current: 'docs-post-processors-docker-save'

View File

@ -25,17 +25,18 @@ that this works with committed resources, rather than exported.
## Configuration
The configuration for this post-processor requires `repository`; all other settings
are optional.
The configuration for this post-processor requires `repository`; all other
settings are optional.
- `repository` (string) - The repository of the image.
- `tag` (string) - The tag for the image. By default this is not set.
- `force` (boolean) - If true, this post-processor forcibly tags the image even
    if the tag name collides with an existing one. Defaults to `false`.
    This option is ignored if Docker >= 1.12.0 is detected,
    since the `force` flag was removed in 1.12.0. [reference](https://docs.docker.com/engine/deprecated/#/f-flag-on-docker-tag)
- `force` (boolean) - If true, this post-processor forcibly tags the image
    even if the tag name collides with an existing one. Defaults to `false`.
    This option is ignored if Docker >= 1.12.0 is detected, since the `force`
    flag was removed in 1.12.0.
    [reference](https://docs.docker.com/engine/deprecated/#/f-flag-on-docker-tag)
## Example

View File

@ -1,9 +1,8 @@
---
description: |
The Google Compute Image Exporter post-processor exports an image from a
Packer googlecompute builder run and uploads it to Google Cloud Storage. The
exported images can be easily shared and uploaded to other Google Cloud
Projects.
The Google Compute Image Exporter post-processor exports an image from a Packer
googlecompute builder run and uploads it to Google Cloud Storage. The exported
images can be easily shared and uploaded to other Google Cloud Projects.
layout: docs
page_title: 'Google Compute Image Exporter - Post-Processors'
sidebar_current: 'docs-post-processors-googlecompute-export'
@ -13,14 +12,15 @@ sidebar_current: 'docs-post-processors-googlecompute-export'
Type: `googlecompute-export`
The Google Compute Image Exporter post-processor exports the resultant image from a
googlecompute build as a gzipped tarball to Google Cloud Storage (GCS).
The Google Compute Image Exporter post-processor exports the resultant image
from a googlecompute build as a gzipped tarball to Google Cloud Storage (GCS).
The exporter uses the same Google Cloud Platform (GCP) project and authentication
credentials as the googlecompute build that produced the image. A temporary VM is
started in the GCP project using these credentials. The VM mounts the built image as
a disk then dumps, compresses, and tars the image. The VM then uploads the tarball
to the provided GCS `paths` using the same credentials.
The exporter uses the same Google Cloud Platform (GCP) project and
authentication credentials as the googlecompute build that produced the image.
A temporary VM is started in the GCP project using these credentials. The VM
mounts the built image as a disk then dumps, compresses, and tars the image.
The VM then uploads the tarball to the provided GCS `paths` using the same
credentials.
As such, the authentication credentials that built the image must have write
permissions to the GCS `paths`.
@ -34,19 +34,20 @@ permissions to the GCS `paths`.
### Optional
- `keep_input_artifact` (boolean) - If true, do not delete the Google Compute Engine
(GCE) image being exported.
- `keep_input_artifact` (boolean) - If true, do not delete the Google Compute
Engine (GCE) image being exported.
## Basic Example
The following example builds a GCE image in the project, `my-project`, with an
account whose keyfile is `account.json`. After the image build, a temporary VM will
be created to export the image as a gzipped tarball to
`gs://mybucket1/path/to/file1.tar.gz` and `gs://mybucket2/path/to/file2.tar.gz`.
`keep_input_artifact` is true, so the GCE image won't be deleted after the export.
account whose keyfile is `account.json`. After the image build, a temporary VM
will be created to export the image as a gzipped tarball to
`gs://mybucket1/path/to/file1.tar.gz` and
`gs://mybucket2/path/to/file2.tar.gz`. `keep_input_artifact` is true, so the
GCE image won't be deleted after the export.
In order for this example to work, the account associated with `account.json` must
have write access to both `gs://mybucket1/path/to/file1.tar.gz` and
In order for this example to work, the account associated with `account.json`
must have write access to both `gs://mybucket1/path/to/file1.tar.gz` and
`gs://mybucket2/path/to/file2.tar.gz`.
``` json

View File

@ -2,7 +2,6 @@
description: |
The Google Compute Image Import post-processor takes a compressed raw disk
image and imports it to a GCE image available to Google Compute Engine.
layout: docs
page_title: 'Google Compute Image Import - Post-Processors'
sidebar_current: 'docs-post-processors-googlecompute-import'
@ -15,52 +14,66 @@ Type: `googlecompute-import`
The Google Compute Image Import post-processor takes a compressed raw disk
image and imports it to a GCE image available to Google Compute Engine.
~> This post-processor is for advanced users. Please ensure you read the [GCE import documentation](https://cloud.google.com/compute/docs/images/import-existing-image) before using this post-processor.
~> This post-processor is for advanced users. Please ensure you read the
[GCE import
documentation](https://cloud.google.com/compute/docs/images/import-existing-image)
before using this post-processor.
## How Does it Work?
The import process operates by uploading a temporary copy of the compressed raw disk image
to a GCS bucket, and calling an import task in GCP on the raw disk file. Once completed, a
GCE image is created containing the converted virtual machine. The temporary raw disk image
copy in GCS can be discarded after the import is complete.
The import process operates by uploading a temporary copy of the compressed raw
disk image to a GCS bucket, and calling an import task in GCP on the raw disk
file. Once completed, a GCE image is created containing the converted virtual
machine. The temporary raw disk image copy in GCS can be discarded after the
import is complete.
Google Cloud has very specific requirements for images being imported. Please see the
[GCE import documentation](https://cloud.google.com/compute/docs/images/import-existing-image)
Google Cloud has very specific requirements for images being imported. Please
see the [GCE import
documentation](https://cloud.google.com/compute/docs/images/import-existing-image)
for details.
## Configuration
### Required
- `account_file` (string) - The JSON file containing your account credentials.
- `account_file` (string) - The JSON file containing your account
credentials.
- `bucket` (string) - The name of the GCS bucket where the raw disk image
will be uploaded.
will be uploaded.
- `image_name` (string) - The unique name of the resulting image.
- `project_id` (string) - The project ID where the GCS bucket exists and
where the GCE image is stored.
where the GCE image is stored.
### Optional
- `gcs_object_name` (string) - The name of the GCS object in `bucket` where the RAW disk image will be copied for import. Defaults to "packer-import-{{timestamp}}.tar.gz".
- `gcs_object_name` (string) - The name of the GCS object in `bucket` where
the RAW disk image will be copied for import. Defaults to
"packer-import-{{timestamp}}.tar.gz".
- `image_description` (string) - The description of the resulting image.
- `image_family` (string) - The name of the image family to which the resulting image belongs.
- `image_family` (string) - The name of the image family to which the
resulting image belongs.
- `image_labels` (object of key/value strings) - Key/value pair labels to apply to the created image.
- `image_labels` (object of key/value strings) - Key/value pair labels to
apply to the created image.
- `keep_input_artifact` (boolean) - If true, do not delete the compressed RAW disk image. Defaults to false.
- `skip_clean` (boolean) - Skip removing the TAR file uploaded to the GCS bucket after the import process has completed. "true" means that we should leave it in the GCS bucket, "false" means to clean it out. Defaults to `false`.
- `keep_input_artifact` (boolean) - If true, do not delete the compressed RAW
disk image. Defaults to false.
- `skip_clean` (boolean) - Skip removing the TAR file uploaded to the GCS
bucket after the import process has completed. "true" means that we should
leave it in the GCS bucket, "false" means to clean it out. Defaults to
`false`.
## Basic Example
Here is a basic example. This assumes that the builder has produced a compressed
raw disk image artifact for us to work with, and that the GCS bucket has been created.
Here is a basic example. This assumes that the builder has produced a
compressed raw disk image artifact for us to work with, and that the GCS bucket
has been created.
``` json
{
@ -70,18 +83,15 @@ raw disk image artifact for us to work with, and that the GCS bucket has been cr
"bucket": "my-bucket",
"image_name": "my-gce-image"
}
```
## QEMU Builder Example
Here is a complete example for building a Fedora 28 server GCE image. For this example
packer was run from a CentOS 7 server with KVM installed. The CentOS 7 server was running
in GCE with the nested hypervisor feature enabled.
Here is a complete example for building a Fedora 28 server GCE image. For this
example packer was run from a CentOS 7 server with KVM installed. The CentOS 7
server was running in GCE with the nested hypervisor feature enabled.
```
$ packer build -var serial=$(tty) build.json
```
$ packer build -var serial=$(tty) build.json
``` json
{

View File

@ -10,6 +10,6 @@ sidebar_current: 'docs-post-processors'
# Post-Processors
Post-processors run after the image is built by the builder and provisioned by
the provisioner(s). Post-processors are optional, and they can be used to upload
artifacts, re-package, or more. For more information about post-processors,
please choose an option from the sidebar.
the provisioner(s). Post-processors are optional, and they can be used to
upload artifacts, re-package, or more. For more information about
post-processors, please choose an option from the sidebar.

View File

@ -1,7 +1,7 @@
---
description: |
The manifest post-processor writes a JSON file with the build artifacts and
IDs from a packer run.
The manifest post-processor writes a JSON file with the build artifacts and IDs
from a packer run.
layout: docs
page_title: 'Manifest - Post-Processors'
sidebar_current: 'docs-post-processors-manifest'
@ -11,24 +11,38 @@ sidebar_current: 'docs-post-processors-manifest'
Type: `manifest`
The manifest post-processor writes a JSON file with a list of all of the artifacts packer produces during a run. If your packer template includes multiple builds, this helps you keep track of which output artifacts (files, AMI IDs, docker containers, etc.) correspond to each build.
The manifest post-processor writes a JSON file with a list of all of the
artifacts packer produces during a run. If your packer template includes
multiple builds, this helps you keep track of which output artifacts (files,
AMI IDs, docker containers, etc.) correspond to each build.
The manifest post-processor is invoked each time a build completes and *updates* data in the manifest file. Builds are identified by name and type, and include their build time, artifact ID, and file list.
The manifest post-processor is invoked each time a build completes and
*updates* data in the manifest file. Builds are identified by name and type,
and include their build time, artifact ID, and file list.
If packer is run with the `-force` flag the manifest file will be truncated automatically during each packer run. Otherwise, subsequent builds will be added to the file. You can use the timestamps to see which is the latest artifact.
If packer is run with the `-force` flag the manifest file will be truncated
automatically during each packer run. Otherwise, subsequent builds will be
added to the file. You can use the timestamps to see which is the latest
artifact.
You can specify manifest more than once and write each build to its own file, or write all builds to the same file. For simple builds manifest only needs to be specified once (see below) but you can also chain it together with other post-processors such as Docker and Artifice.
You can specify manifest more than once and write each build to its own file,
or write all builds to the same file. For simple builds manifest only needs to
be specified once (see below) but you can also chain it together with other
post-processors such as Docker and Artifice.
## Configuration
### Optional:
- `output` (string) The manifest will be written to this file. This defaults to `packer-manifest.json`.
- `strip_path` (boolean) Write only filename without the path to the manifest file. This defaults to false.
- `output` (string) The manifest will be written to this file. This defaults
to `packer-manifest.json`.
- `strip_path` (boolean) Write only filename without the path to the manifest
file. This defaults to false.
### Example Configuration
You can simply add `{"type":"manifest"}` to your post-processor section. Below is a more verbose example:
You can simply add `{"type":"manifest"}` to your post-processor section. Below
is a more verbose example:
``` json
{
@ -65,11 +79,13 @@ An example manifest file looks like:
}
```
If the build is run again, the new build artifacts will be added to the manifest file rather than replacing it. It is possible to grab specific build artifacts from the manifest by using `packer_run_uuid`.
If the build is run again, the new build artifacts will be added to the
manifest file rather than replacing it. It is possible to grab specific build
artifacts from the manifest by using `packer_run_uuid`.
The above manifest was generated with this packer.json:
```json
``` json
{
"builders": [
{

View File

@ -29,16 +29,17 @@ The example below is fully functional.
## Configuration Reference
The reference of available configuration options is listed below. The only
required element is either "inline" or "script". Every other option is optional.
required element is either "inline" or "script". Every other option is
optional.
Exactly *one* of the following is required:
- `command` (string) - This is a single command to execute. It will be written
to a temporary file and run using the `execute_command` call below.
- `command` (string) - This is a single command to execute. It will be
written to a temporary file and run using the `execute_command` call below.
- `inline` (array of strings) - This is an array of commands to execute. The
commands are concatenated by newlines and turned into a single file, so they
are all executed within the same context. This allows you to change
commands are concatenated by newlines and turned into a single file, so
they are all executed within the same context. This allows you to change
directories in one command and use something in the directory in the next
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
@ -56,17 +57,17 @@ Optional parameters:
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the `execute_command`. The format should be `key=value`.
Packer injects some environmental variables by default into the environment,
as well, which are covered in the section below.
Packer injects some environmental variables by default into the
environment, as well, which are covered in the section below.
- `execute_command` (array of strings) - The command used to execute the script. By
default this is `["/bin/sh", "-c", "{{.Vars}}", "{{.Script}}"]`
on unix and `["cmd", "/c", "{{.Vars}}", "{{.Script}}"]` on windows.
This is treated as a [template engine](/docs/templates/engine.html).
There are two available variables: `Script`, which is the path to the script
to run, and `Vars`, which is the list of `environment_vars`, if configured.
If you choose to set this option, make sure that the first element in the
array is the shell program you want to use (for example, "sh" or
- `execute_command` (array of strings) - The command used to execute the
script. By default this is `["/bin/sh", "-c", "{{.Vars}}", "{{.Script}}"]`
on unix and `["cmd", "/c", "{{.Vars}}", "{{.Script}}"]` on windows. This is
treated as a [template engine](/docs/templates/engine.html). There are two
available variables: `Script`, which is the path to the script to run, and
`Vars`, which is the list of `environment_vars`, if configured. If you
choose to set this option, make sure that the first element in the array is
the shell program you want to use (for example, "sh" or
"/usr/local/bin/zsh" or even "powershell.exe" although anything other than
a flavor of the shell command language is not explicitly supported and may
be broken by assumptions made within Packer). It's worth noting that if you
@ -78,50 +79,54 @@ Optional parameters:
one element is provided, Packer will replicate past behavior by appending
your `execute_command` to the array of strings `["sh", "-c"]`. For example,
if you set `"execute_command": "foo bar"`, the final `execute_command` that
Packer runs will be ["sh", "-c", "foo bar"]. If you set `"execute_command": ["foo", "bar"]`,
the final execute_command will remain `["foo", "bar"]`.
Packer runs will be \["sh", "-c", "foo bar"\]. If you set
`"execute_command": ["foo", "bar"]`, the final execute\_command will remain
`["foo", "bar"]`.
Again, the above is only provided as a backwards compatibility fix; we
strongly recommend that you set execute_command as an array of strings.
strongly recommend that you set execute\_command as an array of strings.
- `inline_shebang` (string) - The
[shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use when
running commands specified by `inline`. By default, this is `/bin/sh -e`. If
you're not using `inline`, then this configuration has no effect.
**Important:** If you customize this, be sure to include something like the
`-e` flag, otherwise individual steps failing won't fail the provisioner.
[shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use
when running commands specified by `inline`. By default, this is
`/bin/sh -e`. If you're not using `inline`, then this configuration has no
effect. **Important:** If you customize this, be sure to include something
like the `-e` flag, otherwise individual steps failing won't fail the
provisioner.
- `only_on` (array of strings) - This is an array of
[runtime operating systems](https://golang.org/doc/install/source#environment)
where `shell-local` will execute. This allows you to execute `shell-local`
*only* on specific operating systems. By default, shell-local will always run
    if `only_on` is not set.
- `only_on` (array of strings) - This is an array of [runtime operating
systems](https://golang.org/doc/install/source#environment) where
`shell-local` will execute. This allows you to execute `shell-local` *only*
on specific operating systems. By default, shell-local will always run if
    `only_on` is not set.
- `use_linux_pathing` (bool) - This is only relevant to windows hosts. If you
are running Packer in a Windows environment with the Windows Subsystem for
Linux feature enabled, and would like to invoke a bash script rather than
invoking a Cmd script, you'll need to set this flag to true; it tells Packer
to use the linux subsystem path for your script rather than the Windows path.
(e.g. /mnt/c/path/to/your/file instead of C:/path/to/your/file). Please see
the example below for more guidance on how to use this feature. If you are
not on a Windows host, or you do not intend to use the shell-local
post-processor to run a bash script, please ignore this option.
If you set this flag to true, you still need to provide the standard windows
path to the script when providing a `script`. This is a beta feature.
- `use_linux_pathing` (bool) - This is only relevant to windows hosts. If you
are running Packer in a Windows environment with the Windows Subsystem for
Linux feature enabled, and would like to invoke a bash script rather than
invoking a Cmd script, you'll need to set this flag to true; it tells
Packer to use the linux subsystem path for your script rather than the
Windows path. (e.g. /mnt/c/path/to/your/file instead of
C:/path/to/your/file). Please see the example below for more guidance on
how to use this feature. If you are not on a Windows host, or you do not
intend to use the shell-local post-processor to run a bash script, please
ignore this option. If you set this flag to true, you still need to provide
the standard windows path to the script when providing a `script`. This is
a beta feature.
## Execute Command
To many new users, the `execute_command` is puzzling. However, it provides an
important function: customization of how the command is executed. The most
common use case for this is dealing with **sudo password prompts**. You may also
need to customize this if you use a non-POSIX shell, such as `tcsh` on FreeBSD.
common use case for this is dealing with **sudo password prompts**. You may
also need to customize this if you use a non-POSIX shell, such as `tcsh` on
FreeBSD.
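As a sketch of such a customization, the stanza below pipes a sudo password
into the script via `sudo -S` (the `sudo_password` user variable and the script
path are assumptions for illustration, not a recommended way to store secrets):

``` json
{
  "type": "shell-local",
  "execute_command": [
    "/bin/sh", "-c",
    "{{.Vars}} echo '{{user `sudo_password`}}' | sudo -S -E sh '{{.Script}}'"
  ],
  "script": "scripts/setup.sh"
}
```

Note that the first array element remains the shell program, as recommended
above.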
### The Windows Linux Subsystem
The shell-local post-processor was designed with the idea of allowing you to run
commands in your local operating system's native shell. For Windows, we've
assumed in our defaults that this is Cmd. However, it is possible to run a
bash script as part of the Windows Linux Subsystem from the shell-local
The shell-local post-processor was designed with the idea of allowing you to
run commands in your local operating system's native shell. For Windows, we've
assumed in our defaults that this is Cmd. However, it is possible to run a bash
script as part of the Windows Linux Subsystem from the shell-local
post-processor, by modifying the `execute_command` and the `use_linux_pathing`
options in the post-processor config.
@ -136,32 +141,30 @@ still in beta. There will be some limitations as a result. For example, it will
likely not work unless both Packer and the scripts you want to run are both on
the C drive.
```
{
"builders": [
{
"type": "null",
"communicator": "none"
}
],
"provisioners": [
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest1"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"scripts": ["C:/Users/me/scripts/example_bash.sh"]
},
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest2"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"script": "C:/Users/me/scripts/example_bash.sh"
}
]
}
```
{
"builders": [
{
"type": "null",
"communicator": "none"
}
],
"provisioners": [
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest1"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"scripts": ["C:/Users/me/scripts/example_bash.sh"]
},
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest2"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"script": "C:/Users/me/scripts/example_bash.sh"
}
]
}
## Default Environmental Variables
@ -169,14 +172,15 @@ In addition to being able to specify custom environmental variables using the
`environment_vars` configuration, the provisioner automatically defines certain
commonly useful environmental variables:
- `PACKER_BUILD_NAME` is set to the [name of the
build](/docs/templates/builders.html#named-builds) that Packer is running.
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly from a common provisioning script.

- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create
the machine that the script is running on. This is useful if you want to
run only certain parts of the script on systems built with certain
builders.
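A provisioning script might branch on these variables like the following
sketch (assuming a bash shell; the builder names and steps are just examples):

```shell
#!/bin/bash
# Sketch: branch provisioning steps on the variables Packer exports.
builder_steps() {
  case "$1" in
    amazon-ebs)     echo "aws-specific steps" ;;
    virtualbox-iso) echo "virtualbox-specific steps" ;;
    *)              echo "generic steps" ;;
  esac
}

echo "provisioning build: ${PACKER_BUILD_NAME:-unknown}"
builder_steps "${PACKER_BUILDER_TYPE:-none}"
```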
## Safely Writing A Script
are cleaned up.
For a shell script, that means the script **must** exit with a zero code. You
*must* be extra careful to `exit 0` when necessary.
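A script's exit status is the status of its last command, so a trailing check
that legitimately returns non-zero must be guarded, and the success path should
end with an explicit `exit 0`. A bash sketch with made-up steps:

```shell
#!/bin/bash
# Sketch: guarantee a zero exit code on the success path.
provision() {
  echo "installing packages..."
  # A check that may return non-zero without being an error;
  # unguarded, it would become the script's exit status.
  grep -q "optional-marker" /etc/hosts || true
  exit 0
}

( provision )   # run in a subshell so the status can be inspected here
echo "provision exit status: $?"
```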
## Usage Examples
Example of running a .cmd file on Windows:

```
{
    "type": "shell-local",
    "environment_vars": ["SHELLLOCALTEST=ShellTest1"],
    "scripts": ["./scripts/test_cmd.cmd"]
},
```

Contents of "test_cmd.cmd":

```
echo %SHELLLOCALTEST%
```
Example of running an inline command on Windows. Required customization:
`tempfile_extension`:

```
{
    "type": "shell-local",
    "environment_vars": ["SHELLLOCALTEST=ShellTest2"],
    "tempfile_extension": ".cmd",
    "inline": ["echo %SHELLLOCALTEST%"]
},
```
Example of running a bash command on Windows using WSL. Required
customizations: `use_linux_pathing` and `execute_command`:

```
{
    "type": "shell-local",
    "environment_vars": ["SHELLLOCALTEST=ShellTest3"],
    "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
    "use_linux_pathing": true,
    "script": "./scripts/example_bash.sh"
}
```
Contents of "example_bash.sh":

```
#!/bin/bash
echo $SHELLLOCALTEST
```
Example of running a PowerShell script on Windows. Required customizations:
`env_var_format` and `execute_command`:

```
{
    "type": "shell-local",
    "environment_vars": ["SHELLLOCALTEST=ShellTest4"],
    "execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"],
    "env_var_format": "$env:%s=\"%s\"; ",
    "script": "./scripts/example_ps.ps1"
}
```
Example of running a PowerShell script on Windows as "inline". Required
customizations: `env_var_format`, `tempfile_extension`, and `execute_command`:

```
{
    "type": "shell-local",
    "tempfile_extension": ".ps1",
    "environment_vars": ["SHELLLOCALTEST=ShellTest5"],
    "execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"],
    "env_var_format": "$env:%s=\"%s\"; ",
    "inline": ["write-output $env:SHELLLOCALTEST"]
}
```
Example of running a bash script on Linux:

```
{
    "type": "shell-local",
    "environment_vars": ["PROVISIONERTEST=ProvisionerTest1"],
    "scripts": ["./scripts/example_bash.sh"]
}
```
Example of running bash "inline" on Linux:

```
{
    "type": "shell-local",
    "environment_vars": ["PROVISIONERTEST=ProvisionerTest2"],
    "inline": ["echo hello",
               "echo $PROVISIONERTEST"]
}
```

description: |
The Packer Vagrant Cloud post-processor receives a Vagrant box from the
`vagrant` post-processor and pushes it to Vagrant Cloud. Vagrant Cloud hosts
and serves boxes to Vagrant, allowing you to version and distribute boxes to an
organization in a simple way.
layout: docs
page_title: 'Vagrant Cloud - Post-Processors'
sidebar_current: 'docs-post-processors-vagrant-cloud'
Type: `vagrant-cloud`
The Packer Vagrant Cloud post-processor receives a Vagrant box from the
`vagrant` post-processor and pushes it to Vagrant Cloud. [Vagrant
Cloud](https://app.vagrantup.com/boxes/search) hosts and serves boxes to
Vagrant, allowing you to version and distribute boxes to an organization in a
simple way.
You'll need to be familiar with Vagrant Cloud, have an upgraded account to
enable box hosting, and be distributing your box via the [shorthand
Cloud, for example `hashicorp/precise64`, which is short for
`vagrantcloud.com/hashicorp/precise64`.
- `version` (string) - The version number, typically incrementing a previous
version. The version string is validated based on [Semantic
Versioning](http://semver.org/). The string must match a pattern that could
be semver, and doesn't validate that the version comes after your previous
versions.
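Since the string only has to look like semver, a pre-flight check in your own
build scripts can catch obvious mistakes early. A rough sketch (this pattern is
illustrative only, not the exact validation Vagrant Cloud performs):

```shell
# Rough pre-flight check that a version string looks like
# MAJOR.MINOR.PATCH before invoking Packer; illustrative only.
looks_like_semver() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}

if looks_like_semver "1.0.3"; then
  echo "version looks ok"
fi
```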
### Optional:
Vagrant Cloud, making it active. You can manually release the version via
the API or Web UI. Defaults to false.
- `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud.
This is useful if you're using Vagrant Private Cloud in your own network.
Defaults to `https://vagrantcloud.com/api/v1`
- `version_description` (string) - Optionally markdown text used as a
full-length and in-depth description of the version, typically for denoting
changes introduced
- `box_download_url` (string) - Optional URL for a self-hosted box. If this
is set the box will not be uploaded to the Vagrant Cloud.
## Use with Vagrant Post-Processor

description: |
The Packer Vagrant post-processor takes a build and converts the artifact into
a valid Vagrant box, if it can. This lets you use Packer to automatically
create arbitrarily complex Vagrant boxes, and is in fact how the official boxes
distributed by Vagrant are created.
layout: docs
page_title: 'Vagrant - Post-Processors'
sidebar_current: 'docs-post-processors-vagrant-box'
Type: `vagrant`
The Packer Vagrant post-processor takes a build and converts the artifact into
a valid [Vagrant](https://www.vagrantup.com) box, if it can. This lets you use
Packer to automatically create arbitrarily complex Vagrant boxes, and is in
fact how the official boxes distributed by Vagrant are created.
If you've never used a post-processor before, please read the documentation on
[using post-processors](/docs/templates/post-processors.html) in templates.
This knowledge will be expected for the remainder of this document.
Because Vagrant boxes are
[provider-specific](https://docs.vagrantup.com/v2/boxes/format.html), the
Vagrant post-processor is hardcoded to understand how to convert the artifacts
of certain builders into proper boxes for their respective providers.
Currently, the Vagrant post-processor can create boxes for the following
providers.
with 0 being no compression and 9 being the best compression. By default,
compression is enabled at level 6.
- `include` (array of strings) - Paths to files to include in the Vagrant
box. These files will each be copied into the top level directory of the
Vagrant box (regardless of their paths). They can then be used from the
Vagrantfile.
- `keep_input_artifact` (boolean) - If set to true, do not delete the
`output_directory` on a successful build. Defaults to false.
- `output` (string) - The full path to the box file that will be created by
this post-processor. This is a [configuration
template](/docs/templates/engine.html). The variable `Provider` is replaced
by the Vagrant provider the box is for. The variable `ArtifactId` is
replaced by the ID of the input artifact. The variable `BuildName` is
replaced with the name of the build. By default, the value of this config
is `packer_{{.BuildName}}_{{.Provider}}.box`.
- `vagrantfile_template` (string) - Path to a template to use for the
Vagrantfile that is packaged with the box.
The available provider names are:
- `aws`
- `azure`
- `digitalocean`
- `google`
- `hyperv`
- `parallels`
- `libvirt`
- `lxc`
- `scaleway`
- `virtualbox`
- `vmware`
- `docker`
## Input Artifacts
By default, Packer will delete the original input artifact, assuming you only
want the final Vagrant box as the result. If you wish to keep the input
artifact (the raw virtual machine, for example), then you must configure Packer
to keep it.
Please see the [documentation on input
artifacts](/docs/templates/post-processors.html#toc_2) for more information.
The following Docker input artifacts are supported:
- `docker` builder with `commit: true`, always uses the sha256 hash
- `docker-import`
- `docker-tag`
- `docker-push`

---
description: |
The Packer vSphere Template post-processor takes an artifact from the
VMware-iso builder, built on ESXi (i.e. remote), or an artifact from the
vSphere post-processor, and allows you to mark a VM as a template and leave
it in a path of your choice.
layout: docs
page_title: 'vSphere Template - Post-Processors'
sidebar_current: 'docs-post-processors-vSphere-template'
Type: `vsphere-template`
The Packer vSphere Template post-processor takes an artifact from the
VMware-iso builder, built on ESXi (i.e. remote), or an artifact from the
[vSphere](/docs/post-processors/vsphere.html) post-processor, and allows you
to mark a VM as a template and leave it in a path of your choice.
## Example
Required:
- `host` (string) - The vSphere host that contains the VM built by the
vmware-iso.
- `password` (string) - Password to use to authenticate to the vSphere
endpoint.
- `username` (string) - The username to use to authenticate to the vSphere
endpoint.
Optional:
- `datacenter` (string) - If you have more than one, you will need to specify
which one the ESXi host belongs to.
- `folder` (string) - Target path where the template will be created.
- `insecure` (boolean) - If true, skip verification of the server
certificate. Default is false.
## Using the vSphere Template with local builders
Once the [vSphere](/docs/post-processors/vsphere.html) post-processor takes an
artifact from the VMware builder and uploads it to a vSphere endpoint, you will
likely want to mark that VM as a template. Packer can do this for you
automatically using a sequence definition (a collection of post-processors
that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
``` json
{
}
```
In the example above, the result of each builder is passed through the defined
sequence of post-processors starting with the `vsphere` post-processor which
will upload the artifact to a vSphere endpoint. The resulting artifact is then
passed on to the `vsphere-template` post-processor which handles marking a VM
as a template.

---
description: |
The Packer vSphere post-processor takes an artifact from the VMware builder and
uploads it to a vSphere endpoint.
layout: docs
page_title: 'vSphere - Post-Processors'
sidebar_current: 'docs-post-processors-vsphere'
Required:
- `cluster` (string) - The cluster to upload the VM to.
- `datacenter` (string) - The name of the datacenter within vSphere to add
the VM to.
- `datastore` (string) - The name of the datastore to store this VM. This is
*not required* if `resource_pool` is specified.
- `host` (string) - The vSphere host that will be contacted to perform the VM
upload.
- `password` (string) - Password to use to authenticate to the vSphere
endpoint.
- `username` (string) - The username to use to authenticate to the vSphere
endpoint.
- `vm_name` (string) - The name of the VM once it is uploaded.
Optional:
- `esxi_host` (string) - Target vSphere host. Used to assign a specific ESX
host to upload the resulting VM to, when a vCenter Server is used as
`host`. Can be either a hostname (e.g. "packer-esxi1", requires proper DNS
setup and/or correct DNS search domain setting) or an IPv4 address.
- `disk_mode` (string) - Target disk format. See `ovftool` manual for
available options. By default, "thick" will be used.
- `vm_folder` (string) - The folder within the datastore to store the VM.
- `vm_network` (string) - The name of the VM network this VM will be added
to.
- `overwrite` (boolean) - If true, force the system to overwrite the existing
files instead of creating new ones. Default is false.
- `options` (array of strings) - Custom options to pass to ovftool. See
`ovftool --help` to list all the options.

---
description: |
The ansible-local Packer provisioner will run ansible in ansible's "local" mode
on the remote/guest VM using Playbook and Role files that exist on the guest
VM. This means ansible must be installed on the remote/guest VM. Playbooks and
Roles can be uploaded from your build machine (the one running Packer) to the
vm.
layout: docs
page_title: 'Ansible Local - Provisioners'
sidebar_current: 'docs-provisioners-ansible-local'
Type: `ansible-local`
The `ansible-local` Packer provisioner will run ansible in ansible's "local"
mode on the remote/guest VM using Playbook and Role files that exist on the
guest VM. This means ansible must be installed on the remote/guest VM.
Playbooks and Roles can be uploaded from your build machine (the one running
Packer) to the vm. Ansible is then run on the guest machine in [local
mode](https://docs.ansible.com/ansible/playbooks_delegation.html#local-playbooks)
via the `ansible-playbook` command.
-> **Note:** Ansible will *not* be installed automatically by this
provisioner. This provisioner expects that Ansible is already installed on the
guest/remote machine. It is common practice to use the [shell
provisioner](/docs/provisioners/shell.html) before the Ansible provisioner to
do this.
## Basic Example
The reference of available configuration options is listed below.
Required:
- `playbook_file` (string) - The playbook file to be executed by ansible.
This file must exist on your local system and will be uploaded to the
remote machine. This option is exclusive with `playbook_files`.
- `playbook_files` (array of strings) - The playbook files to be executed by
ansible. These files must exist on your local system. If the files don't
exist in the `playbook_dir` or you don't set `playbook_dir` they will be
uploaded to the remote machine. This option is exclusive with
`playbook_file`.
Optional:
- `command` (string) - The command to invoke ansible. Defaults to
"ANSIBLE\_FORCE\_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook". Note, this
disregards the value of `-color` when passed to `packer build`. To disable
colors, set this to `PYTHONUNBUFFERED=1 ansible-playbook`.
- `extra_arguments` (array of strings) - An array of extra arguments to pass
to the ansible command. By default, this is empty. These arguments *will*
be passed through a shell and arguments should be quoted accordingly. Usage
example:
<!-- -->
"extra_arguments": [ "--extra-vars \"Region={{user `Region`}} Stage={{user `Stage`}}\"" ]
```
- `inventory_file` (string) - The inventory file to be used by ansible. This
file must exist on your local system and will be uploaded to the remote
machine.
When using an inventory file, it's also required to `--limit` the hosts to the
specified host you're building. The `--limit` argument can be provided in the
chi-dbservers
chi-appservers
```
- `playbook_dir` (string) - a path to the complete ansible directory
structure on your local system to be copied to the remote machine as the
`staging_directory` before all other files and directories.
- `playbook_paths` (array of strings) - An array of directories of playbook
files on your local system. These will be uploaded to the remote machine
under `staging_directory`/playbooks. By default, this is empty.
- `galaxy_file` (string) - A requirements file which provides a way to
install roles with the [ansible-galaxy
cli](http://docs.ansible.com/ansible/galaxy.html#the-ansible-galaxy-command-line-tool)
on the remote machine. By default, this is empty.
- `galaxycommand` (string) - The command to invoke ansible-galaxy. By
default, this is `ansible-galaxy`.
- `group_vars` (string) - a path to the directory containing ansible group
variables on your local system to be copied to the remote machine. By
are not correct, use a shell provisioner prior to this to configure it
properly.
- `clean_staging_directory` (boolean) - If set to `true`, the content of the
`staging_directory` will be removed after executing ansible. By default,
this is set to `false`.
## Default Extra Variables
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly when using a common playbook.
- `packer_builder_type` is the type of the builder that was used to create
the machine that the script is running on. This is useful if you want to
run only certain parts of the playbook on systems built with certain
builders.
- `packer_http_addr` If using a builder that provides an http server for file
transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this

---
description: |
The ansible Packer provisioner allows Ansible playbooks to be run to provision
the machine.
layout: docs
page_title: 'Ansible - Provisioners'
sidebar_current: 'docs-provisioners-ansible-remote'
an Ansible inventory file configured to use SSH, runs an SSH server, executes
`ansible-playbook`, and marshals Ansible plays through the SSH server to the
machine being provisioned by Packer.
-> **Note:** Any `remote_user` defined in tasks will be ignored. Packer will
always connect with the user given in the json config for this provisioner.
## Basic Example
Optional Parameters:
- `ansible_env_vars` (array of strings) - Environment variables to set before
running Ansible. Usage example:
``` json
{
"ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ]
}
```
If you are running a Windows build on AWS, Azure or Google Compute and
would like to access the auto-generated password that Packer uses to
connect to a Windows instance via WinRM, you can use the template variable
{{.WinRMPassword}} in this option. For example:

``` json
"ansible_env_vars": [ "WINRM_PASSWORD={{.WinRMPassword}}" ],
```
- `command` (string) - The command to invoke ansible. Defaults to
`ansible-playbook`.
- `empty_groups` (array of strings) - The groups which should be present in
inventory file but remain empty.
These arguments *will not* be passed through a shell and arguments should
not be quoted. Usage example:
``` json
{
"extra_arguments": [ "--extra-vars", "Region={{user `Region`}} Stage={{user `Stage`}}" ]
}
```
If you are running a Windows build on AWS, Azure or Google Compute and
would like to access the auto-generated password that Packer uses to
connect to a Windows instance via WinRM, you can use the template variable
{{.WinRMPassword}} in this option. For example:
``` json
"extra_arguments": [
"--extra-vars", "winrm_password={{ .WinRMPassword }}"
]
```
- `groups` (array of strings) - The groups into which the Ansible host should
be placed. When unspecified, the host is not associated with any groups.
- `inventory_file` (string) - The inventory file to use during provisioning.
When unspecified, Packer will create a temporary inventory file and will
use the `host_alias`.
- `host_alias` (string) - The alias by which the Ansible host should be
known. Defaults to `default`. This setting is ignored when using a custom
inventory file.
- `inventory_directory` (string) - The directory in which to place the
temporary generated Ansible inventory file. By default, this is the
system-specific temporary file location. The fully-qualified name of this
temporary file will be passed to the `-i` argument of the `ansible` command
when this provisioner runs ansible. Specify this if you have an existing
inventory directory with `host_vars` `group_vars` that you would like to
use in the playbook that this provisioner will run.
- `local_port` (string) - The port on which to attempt to listen for SSH
connections. This value is a starting point. The provisioner will attempt
`local_port`. A system-chosen port is used when `local_port` is missing or
empty.
- `sftp_command` (string) - The command to run on the machine being
provisioned by Packer to handle the SFTP protocol that Ansible will use to
transfer files. The command should read and write on stdin and stdout,
respectively. Defaults to `/usr/lib/sftp-server -e`.
- `skip_version_check` (boolean) - Check if ansible is installed prior to
running. Set this to `true`, for example, if you're going to install
ansible during the packer run.
- `ssh_host_key_file` (string) - The SSH key that will be used to run the SSH
server on the host machine to forward commands to the target machine.
Ansible connects to this server and will validate the identity of the
server using the system known\_hosts. The default behavior is to generate
and use a onetime key. Host key checking is disabled via the
`ANSIBLE_HOST_KEY_CHECKING` environment variable if the key is generated.
- `ssh_authorized_key_file` (string) - The SSH public key of the Ansible
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly when using a common playbook.
- `packer_builder_type` is the type of the builder that was used to create
the machine that the script is running on. This is useful if you want to
run only certain parts of the playbook on systems built with certain
builders.
- `packer_http_addr` If using a builder that provides an http server for file
transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this
## Debugging
To debug underlying issues with Ansible, add `"-vvvv"` to `"extra_arguments"`
to enable verbose logging.
``` json
{
"extra_arguments": [ "-vvvv" ]
}
```
### Redhat / CentOS
Redhat / CentOS builds have been known to fail with the following error due to
`sftp_command`, which should be set to `/usr/libexec/openssh/sftp-server -e`:
``` text
==> virtualbox-ovf: starting sftp subsystem
```
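A minimal sketch of a provisioner block that applies this fix (the playbook path is illustrative):

``` json
{
  "type": "ansible",
  "playbook_file": "./playbook.yml",
  "sftp_command": "/usr/libexec/openssh/sftp-server -e"
}
```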
### chroot communicator
Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible
connection to chroot.
``` json
{
"builders": [
{
      ...
```
### winrm communicator
Windows builds require a custom Ansible connection plugin and a particular
configuration. Assume a directory named `connection_plugins` next to the
playbook that contains a file named `packer.py` implementing the connection
plugin. On versions of Ansible before 2.4.x, the following works as the
connection plugin:
``` python
from __future__ import (absolute_import, division, print_function)
class Connection(SSHConnection):
    # ...
super(Connection, self).__init__(*args, **kwargs)
```
Newer versions of Ansible require all plugins to have a documentation string.
You can see if there is a plugin available for the version of Ansible you are
using
[here](https://github.com/hashicorp/packer/tree/master/examples/ansible/connection-plugin).
To create the plugin yourself, you will need to copy all of the `options` from
the `DOCUMENTATION` string from the [ssh.py Ansible connection
plugin](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/ssh.py)
of the Ansible version you are using and add it to a packer.py file similar to
the following:
``` python
from __future__ import (absolute_import, division, print_function)
class Connection(SSHConnection):
    # ...
super(Connection, self).__init__(*args, **kwargs)
```
This template should build a Windows Server 2012 image on Google Cloud
Platform:
``` json
{
  ...
```
### Post i/o timeout errors
If you see
`unknown error: Post http://<ip>:<port>/wsman:dial tcp <ip>:<port>: i/o timeout`
errors while provisioning a Windows machine, try setting Ansible to copy files
over [ssh instead of
sftp](https://docs.ansible.com/ansible/latest/reference_appendices/config.html#envvar-ANSIBLE_SCP_IF_SSH).
### Too many SSH keys
SSH servers only allow you to attempt to authenticate a certain number of
times. All of your loaded keys will be tried before the dynamically generated
key. If you have too many SSH keys loaded in your `ssh-agent`, the Ansible
provisioner may fail authentication with a message similar to this:
``` console
googlecompute: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '[127.0.0.1]:62684' (RSA) to the list of known hosts.\r\nReceived disconnect from 127.0.0.1 port 62684:2: too many authentication failures\r\nAuthentication failed.\r\n", "unreachable": true}
```
To unload all keys from your `ssh-agent`, run:
``` console
$ ssh-add -D
```
### Become: yes
We recommend against running Packer as root; if you do then you won't be able
to successfully run your ansible playbook as root; `become: yes` will fail.

---
description: |
The chef-client Packer provisioner installs and configures software on machines
built by Packer using chef-client. Packer configures a Chef client to talk to a
remote Chef Server to provision the machine.
layout: docs
page_title: 'Chef Client - Provisioners'
sidebar_current: 'docs-provisioners-chef-client'
---

Type: `chef-client`
The Chef Client Packer provisioner installs and configures software on machines
built by Packer using [chef-client](https://docs.chef.io/chef_client.html).
Packer configures a Chef client to talk to a remote Chef Server to provision
the machine.
The provisioner will even install Chef onto your machine if it isn't already
installed, using the official Chef installers provided by Chef.
Configuration" section below for more details.
- `encrypted_data_bag_secret_path` (string) - The path to the file containing
the secret for encrypted data bags. By default, this is empty, so no secret
will be available.
- `execute_command` (string) - The command used to execute Chef. This has
various [configuration template variables](/docs/templates/engine.html)
available. See below for more information.
- `guest_os_type` (string) - The target guest OS type, either "unix" or
"windows". Setting this to "windows" will cause the provisioner to use
Windows friendly paths and commands. By default, this is "unix".
- `install_command` (string) - The command used to install Chef. This has
various [configuration template variables](/docs/templates/engine.html)
available. See below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as
node attributes while running Chef.
- `knife_command` (string) - The command used to run Knife during node
clean-up. This has various [configuration template
variables](/docs/templates/engine.html) available. See below for more
information.
- `node_name` (string) - The name of the node to register with the Chef
Server. This is optional and by default is packer-{{uuid}}.
- `policy_group` (string) - The name of a policy group that exists on the
Chef server. `policy_name` must also be specified.
- `policy_name` (string) - The name of a policy, as identified by the name
setting in a `Policyfile.rb` file. `policy_group` must also be specified.
- `prevent_sudo` (boolean) - By default, the configured commands that are
@ -93,31 +91,32 @@ configuration is actually required.
- `run_list` (array of strings) - The [run
list](http://docs.chef.io/essentials_node_object_run_lists.html) for Chef.
By default this is empty, and will use the run list sent down by the Chef
Server.
- `server_url` (string) - The URL to the Chef server. This is required.
- `skip_clean_client` (boolean) - If true, Packer won't remove the client
from the Chef server after it is done running. By default, this is false.
- `skip_clean_node` (boolean) - If true, Packer won't remove the node from the
Chef server after it is done running. By default, this is false.
- `skip_clean_staging_directory` (boolean) - If true, Packer won't remove the
Chef staging directory from the machine after it is done running. By
default, this is false.
- `skip_install` (boolean) - If true, Chef will not automatically be
installed on the machine using the Chef omnibus installers.
- `ssl_verify_mode` (string) - Set to "verify\_none" to skip validation of
SSL certificates. If not set, this defaults to "verify\_peer" which
validates all SSL certificates.
- `trusted_certs_dir` (string) - This is a directory that contains additional
SSL certificates to trust. Any certificates in this directory will be added
to whatever CA bundle ruby is using. Use this to add self-signed certs for
your Chef Server or local HTTP file servers.
- `staging_directory` (string) - This is the directory where all the
configuration of Chef by Packer will be placed. By default this is
will be used.
- `validation_key_path` (string) - Path to the validation key for
communicating with the Chef Server. This will be uploaded to the remote
machine. If this is NOT set, then it is your responsibility via other means
(shell provisioner, etc.) to get a validation key to where Chef expects it.
## Chef Configuration
``` liquid
...
trusted_certs_dir :{{.TrustedCertsDir}}
{{end}}
```
This template is a [configuration template](/docs/templates/engine.html) and
has a set of variables available to use:
- `ChefEnvironment` - The Chef environment name.
- `EncryptedDataBagSecretPath` - The path to the secret key file to decrypt
## Execute Command

``` liquid
...
c:/opscode/chef/bin/chef-client.bat \
-j {{.JsonPath}}
```
This command can be customized using the `execute_command` configuration. As
you can see from the default value above, the value of this configuration can
contain various template variables, defined below:
- `ConfigPath` - The path to the Chef configuration file.
- `JsonPath` - The path to the JSON attributes file for the node.
- `Sudo` - A boolean of whether to `sudo` the command or not, depending on
the value of the `prevent_sudo` configuration.
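For example, a customized `execute_command` that raises the log level might
look like the following sketch (the server URL is a placeholder, and the extra
`-l debug` flag is illustrative):

``` json
{
  "type": "chef-client",
  "server_url": "https://mychefserver.com/",
  "execute_command": "{{if .Sudo}}sudo {{end}}chef-client --no-color -c {{.ConfigPath}} -j {{.JsonPath}} -l debug"
}
```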
## Install Command
## Knife Command

This command can be customized using the `knife_command` configuration. As you
can see from the default value above, the value of this configuration can
contain various template variables, defined below:
- `Args` - The command arguments that are getting passed to the Knife
command.
- `Flags` - The command flags that are getting passed to the Knife command.
- `Sudo` - A boolean of whether to `sudo` the command or not, depending on
the value of the `prevent_sudo` configuration.
## Folder Permissions
!> The `chef-client` provisioner will chmod the directory with your Chef
keys to 777. This is to ensure that Packer can upload and make use of that
directory. However, once the machine is created, you usually don't want to keep
these directories with those permissions. To change the permissions on the
directories, append a shell provisioner after Chef to modify them.
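For example, a shell provisioner placed after the `chef-client` provisioner
can tighten the permissions again (a sketch; the path shown assumes the
default staging directory, and the server URL is a placeholder):

``` json
{
  "provisioners": [
    {
      "type": "chef-client",
      "server_url": "https://mychefserver.com/"
    },
    {
      "type": "shell",
      "inline": ["sudo chmod -R 755 /tmp/packer-chef-client"]
    }
  ]
}
```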
## Examples
**Packer variables**
Set the necessary Packer variables using environment variables or provide a
[var file](/docs/templates/user-variables.html).
``` json
"variables": {
  ...
```
And ./config/client.rb.template referenced by the above configuration:
``` ruby
log_level :info
log_location STDOUT
local_mode true
# ...
```
**Packer variables**
Set the necessary Packer variables using environment variables or provide a
[var file](/docs/templates/user-variables.html).
``` json
"variables": {

  ...
```
The reference of available configuration options is listed below. No
configuration is actually required, but at least `run_list` is recommended.
- `chef_environment` (string) - The name of the `chef_environment` sent to
the Chef server. By default this is empty and will not use an environment
- `config_template` (string) - Path to a template that will be used for the
Chef configuration file. By default Packer only sets configuration it needs
- `cookbook_paths` (array of strings) - This is an array of paths to
"cookbooks" directories on your local filesystem. These will be uploaded to
the remote machine in the directory specified by the `staging_directory`.
By default, this is empty.
- `data_bags_path` (string) - The path to the "data\_bags" directory on your
local filesystem. These will be uploaded to the remote machine in the
directory specified by the `staging_directory`. By default, this is empty.
- `execute_command` (string) - The command used to execute Chef. This has
various [configuration template variables](/docs/templates/engine.html)
available. See below for more information.
- `guest_os_type` (string) - The target guest OS type, either "unix" or
"windows". Setting this to "windows" will cause the provisioner to use
Windows friendly paths and commands. By default, this is "unix".
- `install_command` (string) - The command used to install Chef. This has
various [configuration template variables](/docs/templates/engine.html)
available. See below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as
node attributes while running Chef.
provisioner or step. If specified, Chef will be configured to look for
cookbooks here. By default, this is empty.
- `roles_path` (string) - The path to the "roles" directory on your local
filesystem. These will be uploaded to the remote machine in the directory
specified by the `staging_directory`. By default, this is empty.
- `run_list` (array of strings) - The [run
list](https://docs.chef.io/run_lists.html) for Chef. By default this is
empty.
- `skip_install` (boolean) - If true, Chef will not automatically be
installed on the machine using the Chef omnibus installers.
- `staging_directory` (string) - This is the directory where all the
configuration of Chef by Packer will be placed. By default this is
able to create directories and write into this folder. If the permissions
are not correct, use a shell provisioner prior to this to configure it
properly.
- `version` (string) - The version of Chef to be installed. By default this
is empty which will install the latest version of Chef.
## Chef Configuration
The default value for the configuration template is:

``` liquid
cookbook_path [{{.CookbookPaths}}]
```
This template is a [configuration template](/docs/templates/engine.html) and
has a set of variables available to use:
- `ChefEnvironment` - The current enabled environment. Only non-empty if the
environment path is set.
- `CookbookPaths` is the set of cookbook paths, ready to be embedded directly
into a Ruby array to configure Chef.
- `DataBagsPath` is the path to the data bags folder.
- `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret
- `EnvironmentsPath` - The path to the environments folder.
## Execute Command

``` liquid
...
c:/opscode/chef/bin/chef-solo.bat \
-j {{.JsonPath}}
```
This command can be customized using the `execute_command` configuration. As
you can see from the default value above, the value of this configuration can
contain various template variables, defined below:
- `ConfigPath` - The path to the Chef configuration file.
- `JsonPath` - The path to the JSON attributes file for the node.
- `Sudo` - A boolean of whether to `sudo` the command or not, depending on
the value of the `prevent_sudo` configuration.
## Install Command

---
description: 'The converge Packer provisioner uses Converge modules to provision the machine.'
layout: docs
page_title: 'Converge - Provisioners'
sidebar_current: 'docs-provisioners-converge'
---

The example below is fully functional.
The reference of available configuration options is listed below. The only
required element is "module". Every other option is optional.
- `module` (string) - Path (or URL) to the root module that Converge will
apply.
Optional parameters:
- `bootstrap` (boolean, defaults to false) - Set to allow the provisioner to
download the latest Converge bootstrap script and the specified `version`
of Converge from the internet.
- `version` (string) - Set to a [released Converge
version](https://github.com/asteris-llc/converge/releases) for bootstrap.
- `module_dirs` (array of directory specifications) - Module directories to
transfer to the remote host for execution. See below for the specification.
- `working_directory` (string) - The directory that Converge will change to
before execution.
- `params` (maps of string to string) - parameters to pass into the root
module.
- `execute_command` (string) - the command used to execute Converge. This has
various [configuration template variables](/docs/templates/engine.html)
available.
- `prevent_sudo` (boolean) - stop Converge from running with administrator
privileges via sudo
- `bootstrap_command` (string) - the command used to bootstrap Converge. This
has various [configuration template variables](/docs/templates/engine.html)
available.
- `prevent_bootstrap_sudo` (boolean) - stop Converge from bootstrapping with
administrator privileges via sudo
- `source` (string) - the path to the folder on the local machine.
- `destination` (string) - the path to the folder on the remote machine.
Parent directories will not be created; use the shell module to do this.
- `exclude` (array of string) - files and directories to exclude from
transfer.
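Putting those fields together, a `module_dirs` entry might look like this
sketch (all paths are illustrative):

``` json
{
  "type": "converge",
  "module": "/opt/converge/main.hcl",
  "module_dirs": [
    {
      "source": "./modules",
      "destination": "/opt/converge",
      "exclude": [".git"]
    }
  ]
}
```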
### Execute Command
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Converge:
``` liquid
cd {{.WorkingDirectory}} && \
  ...
{{.Module}}
```
This command can be customized using the `execute_command` configuration. As
you can see from the default value above, the value of this configuration can
contain various template variables:
- `WorkingDirectory` - `directory` from the configuration.
- `Sudo` - the opposite of `prevent_sudo` from the configuration.
- `ParamsJSON` - The unquoted JSONified form of `params` from the
configuration.
- `Module` - `module` from the configuration.
### Bootstrap Command
By default, Packer uses the following command to bootstrap Converge:

``` liquid
curl -s https://get.converge.sh | {{if .Sudo}}sudo {{end}}sh {{if ne .Version ""}}-s -- -v {{.Version}}{{end}}
```
This command can be customized using the `bootstrap_command` configuration. As
you can see from the default values above, the value of this configuration can
contain various template variables:
- `Sudo` - the opposite of `prevent_bootstrap_sudo` from the configuration.

Packer is extensible, allowing you to write new provisioners without having to
modify the core source code of Packer itself. Documentation for creating new
provisioners is covered in the [custom
provisioners](/docs/extending/custom-provisioners.html) page of the Packer
plugin section.

Type: `file`
The file Packer provisioner uploads files to machines built by Packer. The
recommended usage of the file provisioner is to use it to upload files, and
then use [shell provisioner](/docs/provisioners/shell.html) to move them to the
proper place, set permissions, etc.
The file provisioner can upload both single files and complete directories.
The available configuration options are listed below.
### Required
- `source` (string) - The path to a local file or directory to upload to the
machine. The path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed. If this is a
directory, the existence of a trailing slash is important. Read below on
uploading directories.
- `destination` (string) - The path where the file will be uploaded to in the
machine. This value must be a writable location and any parent directories
must already exist. If the source is a file, it's a good idea to make the
destination a file as well, but if you set your destination as a directory,
at least make sure that the destination ends in a trailing slash so that
Packer knows to use the source's basename in the final upload path. Failure
to do so may cause Packer to fail on file uploads. If the destination file
already exists, it will be overwritten.
- `direction` (string) - The direction of the file transfer. This defaults to
"upload". If it is set to "download" then the file "source" in the machine
## Directory Uploads
The file provisioner is also able to upload a complete directory to the remote
machine. When uploading a directory, there are a few important things you
should know.
First, the destination directory must already exist. If you need to create it,
use a shell provisioner just prior to the file provisioner in order to create
Next, the existence of a trailing slash on the source path will determine
whether the directory name will be embedded within the destination, or whether
the destination will be created. An example explains this best:
If the source is `/foo` (no trailing slash), and the destination is `/tmp`,
then the contents of `/foo` on the local machine will be uploaded to `/tmp/foo`
on the remote machine. The `foo` directory on the remote machine will be
created by Packer.
If the source, however, is `/foo/` (a trailing slash is present), and the
destination is `/tmp`, then the contents of `/foo` will be uploaded into `/tmp`
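To illustrate, a hypothetical pair of file provisioner blocks (the paths are examples only) shows the two behaviors side by side:

``` json
{
  "provisioners": [
    {
      "type": "file",
      "source": "/foo",
      "destination": "/tmp"
    },
    {
      "type": "file",
      "source": "/foo/",
      "destination": "/tmp"
    }
  ]
}
```

The first block creates `/tmp/foo` on the remote machine; the second uploads the contents of `/foo` directly into `/tmp`.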
@ -97,11 +97,10 @@ the covers, rsync may or may not be used.
In general, local files used as the source **must** exist before Packer is run.
This is great for catching typos and ensuring that once a build is started,
it will succeed. However, this also means that you can't generate a file
during your build and then upload it using the file provisioner later.
A convenient workaround is to upload a directory instead of a file. The
directory still must exist, but its contents don't. You can write your
generated file to the directory during the Packer run, and have it be uploaded
later.
during your build and then upload it using the file provisioner later. A
convenient workaround is to upload a directory instead of a file. The directory
still must exist, but its contents don't. You can write your generated file to
the directory during the Packer run, and have it be uploaded later.
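As a sketch of this workaround (the directory and file names are hypothetical), an empty `generated/` directory exists before `packer build` runs, a shell-local provisioner writes into it during the build, and a later file provisioner uploads it:

``` json
{
  "provisioners": [
    {
      "type": "shell-local",
      "command": "echo 'created during the build' > generated/artifact.txt"
    },
    {
      "type": "file",
      "source": "generated/",
      "destination": "/tmp/generated"
    }
  ]
}
```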
## Symbolic link uploads


@ -1,7 +1,6 @@
---
description: |
The PowerShell Packer provisioner runs PowerShell scripts on Windows
machines.
The PowerShell Packer provisioner runs PowerShell scripts on Windows machines.
It assumes that the communicator in use is WinRM.
layout: docs
page_title: 'PowerShell - Provisioners'
@ -13,9 +12,9 @@ sidebar_current: 'docs-provisioners-powershell'
Type: `powershell`
The PowerShell Packer provisioner runs PowerShell scripts on Windows machines.
It assumes that the communicator in use is WinRM. However, the provisioner
can work equally well (with a few caveats) when combined with the SSH
communicator. See the [section
It assumes that the communicator in use is WinRM. However, the provisioner can
work equally well (with a few caveats) when combined with the SSH communicator.
See the [section
below](/docs/provisioners/powershell.html#combining-the-powershell-provisioner-with-the-ssh-communicator)
for details.
@ -45,9 +44,9 @@ Exactly *one* of the following is required:
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
- `script` (string) - The path to a script to upload and execute in
the machine. This path can be absolute or relative. If it is relative, it
is relative to the working directory when Packer is executed.
- `script` (string) - The path to a script to upload and execute in the
machine. This path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed.
- `scripts` (array of strings) - An array of scripts to execute. The scripts
will be uploaded and executed in the order specified. Each script is
@ -68,21 +67,21 @@ Optional parameters:
```
The value of this is treated as [configuration
template](/docs/templates/engine.html). There are two
available variables: `Path`, which is the path to the script to run, and
`Vars`, which is the location of a temp file containing the list of
`environment_vars`, if configured.
template](/docs/templates/engine.html). There are two available variables:
`Path`, which is the path to the script to run, and `Vars`, which is the
location of a temp file containing the list of `environment_vars`, if
configured.
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the execute\_command. The format should be `key=value`.
Packer injects some environmental variables by default into the
environment, as well, which are covered in the section below.
If you are running on AWS, Azure or Google Compute and would like to access the generated
password that Packer uses to connect to the instance via
WinRM, you can use the template variable `{{.WinRMPassword}}` to set this
as an environment variable. For example:
environment, as well, which are covered in the section below. If you are
running on AWS, Azure or Google Compute and would like to access the
generated password that Packer uses to connect to the instance via WinRM,
you can use the template variable `{{.WinRMPassword}}` to set this as an
environment variable. For example:
```json
``` json
{
"type": "powershell",
"environment_vars": "WINRMPASS={{.WinRMPassword}}",
@ -98,12 +97,11 @@ Optional parameters:
```
The value of this is treated as [configuration
template](/docs/templates/engine.html). There are two
available variables: `Path`, which is the path to the script to run, and
`Vars`, which is the location of a temp file containing the list of
`environment_vars`. The value of both `Path` and `Vars` can be
manually configured by setting the values for `remote_path` and
`remote_env_var_path` respectively.
template](/docs/templates/engine.html). There are two available variables:
`Path`, which is the path to the script to run, and `Vars`, which is the
location of a temp file containing the list of `environment_vars`. The
value of both `Path` and `Vars` can be manually configured by setting the
values for `remote_path` and `remote_env_var_path` respectively.
If you use the SSH communicator and have changed your default shell, you
may need to modify your `execute_command` to make sure that the command is
@ -112,10 +110,10 @@ Optional parameters:
- `elevated_user` and `elevated_password` (string) - If specified, the
PowerShell script will be run with elevated privileges using the given
Windows user. If you are running a build on AWS, Azure or Google Compute and would like to run using
the generated password that Packer uses to connect to the instance via
WinRM, you may do so by using the template variable {{.WinRMPassword}}.
For example:
Windows user. If you are running a build on AWS, Azure or Google Compute
and would like to run using the generated password that Packer uses to
connect to the instance via WinRM, you may do so by using the template
variable {{.WinRMPassword}}. For example:
``` json
"elevated_user": "Administrator",
@ -124,32 +122,30 @@ Optional parameters:
- `remote_path` (string) - The path where the PowerShell script will be
uploaded to within the target build machine. This defaults to
`C:/Windows/Temp/script-UUID.ps1` where UUID is replaced with a
dynamically generated string that uniquely identifies the script.
`C:/Windows/Temp/script-UUID.ps1` where UUID is replaced with a dynamically
generated string that uniquely identifies the script.
This setting allows users to override the default upload location. The
value must be a writable location and any parent directories must
already exist.
value must be a writable location and any parent directories must already
exist.
- `remote_env_var_path` (string) - Environment variables required within
the remote environment are uploaded within a PowerShell script and then
enabled by 'dot sourcing' the script immediately prior to execution of
the main command or script.
- `remote_env_var_path` (string) - Environment variables required within the
remote environment are uploaded within a PowerShell script and then enabled
by 'dot sourcing' the script immediately prior to execution of the main
command or script.
The path the environment variables script will be uploaded to defaults to
`C:/Windows/Temp/packer-ps-env-vars-UUID.ps1` where UUID is replaced
with a dynamically generated string that uniquely identifies the
script.
`C:/Windows/Temp/packer-ps-env-vars-UUID.ps1` where UUID is replaced with a
dynamically generated string that uniquely identifies the script.
This setting allows users to override the location the environment
variable script is uploaded to. The value must be a writable location
and any parent directories must already exist.
This setting allows users to override the location the environment variable
script is uploaded to. The value must be a writable location and any parent
directories must already exist.
- `start_retry_timeout` (string) - The amount of time to attempt to *start*
the remote process. By default this is "5m" or 5 minutes. This setting
exists in order to deal with times when SSH may restart, such as a
system reboot. Set this to a higher value if reboots take a longer amount
of time.
exists in order to deal with times when SSH may restart, such as a system
reboot. Set this to a higher value if reboots take a longer amount of time.
- `valid_exit_codes` (list of ints) - Valid exit codes for the script. By
default this is just 0.
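For example, a configuration overriding the default upload locations might look like the following (the paths and the extra exit code are illustrative only; 3010 is the code Windows commonly returns when a reboot is required):

``` json
{
  "type": "powershell",
  "script": "setup.ps1",
  "remote_path": "C:/Windows/Temp/setup.ps1",
  "remote_env_var_path": "C:/Windows/Temp/setup-env-vars.ps1",
  "valid_exit_codes": [0, 3010]
}
```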
@ -160,8 +156,8 @@ In addition to being able to specify custom environmental variables using the
`environment_vars` configuration, the provisioner automatically defines certain
commonly useful environmental variables:
- `PACKER_BUILD_NAME` is set to the
[name of the build](/docs/templates/builders.html#named-builds) that Packer is running.
- `PACKER_BUILD_NAME` is set to the [name of the
build](/docs/templates/builders.html#named-builds) that Packer is running.
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly from a common provisioning script.
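A minimal sketch of reading this variable from an inline script:

``` json
{
  "type": "powershell",
  "inline": ["Write-Output \"Running in build: $Env:PACKER_BUILD_NAME\""]
}
```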
@ -179,27 +175,24 @@ commonly useful environmental variables:
## Combining the PowerShell Provisioner with the SSH Communicator
The good news first. If you are using the
[Microsoft port of OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki)
then the provisioner should just work as expected - no extra configuration
effort is required.
The good news first. If you are using the [Microsoft port of
OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki) then the provisioner
should just work as expected - no extra configuration effort is required.
Now the caveats. If you are using an alternative configuration, and your SSH
connection lands you in a *nix shell on the remote host, then you will most
connection lands you in a \*nix shell on the remote host, then you will most
likely need to manually set the `execute_command`; The default
`execute_command` used by Packer will not work for you.
When configuring the command you will need to ensure that any dollar signs
or other characters that may be incorrectly interpreted by the remote shell
are escaped accordingly.
`execute_command` used by Packer will not work for you. When configuring the
command you will need to ensure that any dollar signs or other characters that
may be incorrectly interpreted by the remote shell are escaped accordingly.
The following example shows how the standard `execute_command` can be
reconfigured to work on a remote system with
[Cygwin/OpenSSH](https://cygwin.com/) installed.
The `execute_command` has each dollar sign backslash escaped so that it is
not interpreted by the remote Bash shell - Bash being the default shell for
Cygwin environments.
[Cygwin/OpenSSH](https://cygwin.com/) installed. The `execute_command` has each
dollar sign backslash escaped so that it is not interpreted by the remote Bash
shell - Bash being the default shell for Cygwin environments.
```json
``` json
"provisioners": [
{
"type": "powershell",
@ -211,20 +204,19 @@ Cygwin environments.
]
```
## Packer's Handling of Characters Special to PowerShell
The escape character in PowerShell is the `backtick`, also sometimes
referred to as the `grave accent`. When, and when not, to escape characters
special to PowerShell is probably best demonstrated with a series of examples.
The escape character in PowerShell is the `backtick`, also sometimes referred
to as the `grave accent`. When, and when not, to escape characters special to
PowerShell is probably best demonstrated with a series of examples.
### When To Escape...
Users need to deal with escaping characters special to PowerShell when they
appear *directly* in commands used in the `inline` PowerShell provisioner and
when they appear *directly* in the user's own scripts.
Note that where double quotes appear within double quotes, the addition of
a backslash escape is required for the JSON template to be parsed correctly.
when they appear *directly* in the user's own scripts. Note that where double
quotes appear within double quotes, the addition of a backslash escape is
required for the JSON template to be parsed correctly.
``` json
"provisioners": [
@ -242,21 +234,19 @@ a backslash escape is required for the JSON template to be parsed correctly.
The above snippet should result in the following output on the Packer console:
```
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner508190439
amazon-ebs: A literal dollar $ must be escaped
amazon-ebs: A literal backtick ` must be escaped
amazon-ebs: Here "double quotes" must be escaped
amazon-ebs: Here 'single quotes' don't really need to be
amazon-ebs: escaped... but it doesn't hurt to do so.
```
```
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner508190439
    amazon-ebs: A literal dollar $ must be escaped
    amazon-ebs: A literal backtick ` must be escaped
    amazon-ebs: Here "double quotes" must be escaped
    amazon-ebs: Here 'single quotes' don't really need to be
    amazon-ebs: escaped... but it doesn't hurt to do so.
```
### When Not To Escape...
Special characters appearing in user environment variable values and in the
`elevated_user` and `elevated_password` fields will be automatically
dealt with for the user. There is no need to use escapes in these instances.
`elevated_user` and `elevated_password` fields will be automatically dealt with
for the user. There is no need to use escapes in these instances.
``` json
{
@ -296,16 +286,14 @@ dealt with for the user. There is no need to use escapes in these instances.
The above snippet should result in the following output on the Packer console:
```
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner961728919
amazon-ebs: The dollar in the elevated_password is interpreted correctly
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner142826554
amazon-ebs: In the following examples the special character is interpreted correctly:
amazon-ebs: The dollar in VAR1: A$Dollar
amazon-ebs: The backtick in VAR2: A`Backtick
amazon-ebs: The single quote in VAR3: A'SingleQuote
amazon-ebs: The double quote in VAR4: A"DoubleQuote
amazon-ebs: The dollar in VAR5 (expanded from a user var): My$tring
```
```
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner961728919
    amazon-ebs: The dollar in the elevated_password is interpreted correctly
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner142826554
    amazon-ebs: In the following examples the special character is interpreted correctly:
    amazon-ebs: The dollar in VAR1: A$Dollar
    amazon-ebs: The backtick in VAR2: A`Backtick
    amazon-ebs: The single quote in VAR3: A'SingleQuote
    amazon-ebs: The double quote in VAR4: A"DoubleQuote
    amazon-ebs: The dollar in VAR5 (expanded from a user var): My$tring
```


@ -1,10 +1,10 @@
---
description: |
The masterless Puppet Packer provisioner configures Puppet to run on the
machines by Packer from local modules and manifest files. Modules and
manifests can be uploaded from your local machine to the remote machine or can
simply use remote paths. Puppet is run in masterless mode, meaning it never
communicates to a Puppet master.
machines by Packer from local modules and manifest files. Modules and manifests
can be uploaded from your local machine to the remote machine or can simply use
remote paths. Puppet is run in masterless mode, meaning it never communicates
to a Puppet master.
layout: docs
page_title: 'Puppet Masterless - Provisioners'
sidebar_current: 'docs-provisioners-puppet-masterless'
@ -46,106 +46,103 @@ The reference of available configuration options is listed below.
Required parameters:
- `manifest_file` (string) - This is either a path to a puppet manifest
(`.pp` file) *or* a directory containing multiple manifests that puppet will
apply (the ["main
(`.pp` file) *or* a directory containing multiple manifests that puppet
will apply (the ["main
manifest"](https://docs.puppetlabs.com/puppet/latest/reference/dirs_manifest.html)).
These file(s) must exist on your local system and will be uploaded to the
remote machine.
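A minimal template using only this required parameter might look like the following (the manifest path is an example):

``` json
{
  "provisioners": [
    {
      "type": "puppet-masterless",
      "manifest_file": "site.pp"
    }
  ]
}
```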
Optional parameters:
- `execute_command` (string) - The command-line to execute Puppet. This also has
various [configuration template variables](/docs/templates/engine.html) available.
- `execute_command` (string) - The command-line to execute Puppet. This also
has various [configuration template variables](/docs/templates/engine.html)
available.
- `extra_arguments` (array of strings) - Additional options to
pass to the Puppet command. This allows for customization of
`execute_command` without having to completely replace
or subsume its contents, making forward-compatible customizations much
easier to maintain.
This string is lazy-evaluated so one can incorporate logic driven by template variables as
well as private elements of ExecuteTemplate (see source: provisioner/puppet-masterless/provisioner.go).
```
[
{{if ne "{{user environment}}" ""}}--environment={{user environment}}{{end}},
{{if ne ".ModulePath" ""}}--modulepath="{{.ModulePath}}{{.ModulePathJoiner}}$(puppet config print {{if ne "{{user `environment`}}" ""}}--environment={{user `environment`}}{{end}} modulepath)"{{end}}
]
```
- `extra_arguments` (array of strings) - Additional options to pass to the
Puppet command. This allows for customization of
`execute_command` without having to completely replace or subsume its
contents, making forward-compatible customizations much easier to maintain.
This string is lazy-evaluated so one can incorporate logic driven by
template variables as well as private elements of ExecuteTemplate (see
source: provisioner/puppet-masterless/provisioner.go).
```
[
  {{if ne "{{user environment}}" ""}}--environment={{user environment}}{{end}},
  {{if ne ".ModulePath" ""}}--modulepath="{{.ModulePath}}{{.ModulePathJoiner}}$(puppet config print {{if ne "{{user `environment`}}" ""}}--environment={{user `environment`}}{{end}} modulepath)"{{end}}
]
```
- `facter` (object of key:value strings) - Additional
[facts](https://puppetlabs.com/facter) to make
available to the Puppet run.
[facts](https://puppetlabs.com/facter) to make available to the Puppet run.
- `guest_os_type` (string) - The remote host's OS type ('windows' or 'unix') to
tailor command-line and path separators. (default: unix).
- `guest_os_type` (string) - The remote host's OS type ('windows' or 'unix')
to tailor command-line and path separators. (default: unix).
- `hiera_config_path` (string) - Local path to self-contained Hiera
data to be uploaded. NOTE: If you need data directories
they must be previously transferred with a File provisioner.
- `hiera_config_path` (string) - Local path to self-contained Hiera data to
be uploaded. NOTE: If you need data directories they must be previously
transferred with a File provisioner.
- `ignore_exit_codes` (boolean) - If true, Packer will ignore failures.
- `manifest_dir` (string) - Local directory with manifests to be
uploaded. This is useful if your main manifest uses imports, but the
directory might not contain the `manifest_file` itself.
- `manifest_dir` (string) - Local directory with manifests to be uploaded.
This is useful if your main manifest uses imports, but the directory might
not contain the `manifest_file` itself.
~> `manifest_dir` is passed to Puppet as `--manifestdir` option.
This option was deprecated in puppet 3.6, and removed in puppet 4.0. If you have
multiple manifests you should use `manifest_file` instead.
~> `manifest_dir` is passed to Puppet as `--manifestdir` option. This option
was deprecated in puppet 3.6, and removed in puppet 4.0. If you have multiple
manifests you should use `manifest_file` instead.
- `module_paths` (array of strings) - Array of local module
directories to be uploaded.
- `module_paths` (array of strings) - Array of local module directories to be
uploaded.
- `prevent_sudo` (boolean) - On Unix platforms Puppet is typically invoked with `sudo`. If true,
it will be omitted. (default: false)
- `prevent_sudo` (boolean) - On Unix platforms Puppet is typically invoked
with `sudo`. If true, it will be omitted. (default: false)
- `puppet_bin_dir` (string) - Path to the Puppet binary. Ideally the program
should be on the system (unix: `$PATH`, windows: `%PATH%`), but some builders (eg. Docker) do
not run profile-setup scripts and therefore PATH might be empty or minimal.
should be on the system (unix: `$PATH`, windows: `%PATH%`), but some
builders (eg. Docker) do not run profile-setup scripts and therefore PATH
might be empty or minimal.
- `staging_directory` (string) - Directory to where uploaded files
will be placed (unix: "/tmp/packer-puppet-masterless",
windows: "%SYSTEMROOT%/Temp/packer-puppet-masterless").
It doesn't need to pre-exist, but the parent must have permissions sufficient
for the account Packer connects as to create directories and write files.
Use a Shell provisioner to prepare the way if needed.
- `staging_directory` (string) - Directory to where uploaded files will be
placed (unix: "/tmp/packer-puppet-masterless", windows:
"%SYSTEMROOT%/Temp/packer-puppet-masterless"). It doesn't need to
pre-exist, but the parent must have permissions sufficient for the account
Packer connects as to create directories and write files. Use a Shell
provisioner to prepare the way if needed.
- `working_directory` (string) - Directory from which `execute_command` will be run.
If using Hiera files with relative paths, this option can be helpful. (default: `staging_directory`)
- `working_directory` (string) - Directory from which `execute_command` will
be run. If using Hiera files with relative paths, this option can be
helpful. (default: `staging_directory`)
## Execute Command
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Puppet:
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} {{end}}
{{if .Sudo}}sudo -E {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet apply --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}}
{{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}}
{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
{{.ManifestFile}}
```
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} {{end}}
{{if .Sudo}}sudo -E {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet apply --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}}
{{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}}
{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
{{.ManifestFile}}
```
The following command is used if guest OS type is windows:
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} && {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet apply --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}}
{{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}}
{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
{{.ManifestFile}}
```
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} && {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet apply --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}}
{{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}}
{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
{{.ManifestFile}}
```
## Default Facts
@ -157,6 +154,7 @@ facts:
This is most useful when Packer is making multiple builds and you want to
distinguish them in your Hiera hierarchy.
- `packer_builder_type` is the type of the builder that was used to create the
machine that Puppet is running on. This is useful if you want to run only
certain parts of your Puppet code on systems built with certain builders.
- `packer_builder_type` is the type of the builder that was used to create
the machine that Puppet is running on. This is useful if you want to run
only certain parts of your Puppet code on systems built with certain
builders.


@ -1,7 +1,7 @@
---
description: |
The puppet-server Packer provisioner provisions Packer machines with Puppet
by connecting to a Puppet master.
The puppet-server Packer provisioner provisions Packer machines with Puppet by
connecting to a Puppet master.
layout: docs
page_title: 'Puppet Server - Provisioners'
sidebar_current: 'docs-provisioners-puppet-server'
@ -11,8 +11,8 @@ sidebar_current: 'docs-provisioners-puppet-server'
Type: `puppet-server`
The `puppet-server` Packer provisioner provisions Packer machines with Puppet by
connecting to a Puppet master.
The `puppet-server` Packer provisioner provisions Packer machines with Puppet
by connecting to a Puppet master.
-> **Note:** Puppet will *not* be installed automatically by this
provisioner. This provisioner expects that Puppet is already installed on the
@ -43,45 +43,45 @@ The provisioner takes various options. None are strictly required. They are
listed below:
- `client_cert_path` (string) - Path to the directory on your disk that
contains the client certificate for the node. This defaults to nothing,
in which case a client cert won't be uploaded.
contains the client certificate for the node. This defaults to nothing, in
which case a client cert won't be uploaded.
- `client_private_key_path` (string) - Path to the directory on your disk that
contains the client private key for the node. This defaults to nothing, in
which case a client private key won't be uploaded.
- `client_private_key_path` (string) - Path to the directory on your disk
that contains the client private key for the node. This defaults to
nothing, in which case a client private key won't be uploaded.
- `execute_command` (string) - The command-line to execute Puppet. This also has
various [configuration template variables](/docs/templates/engine.html) available.
- `execute_command` (string) - The command-line to execute Puppet. This also
has various [configuration template variables](/docs/templates/engine.html)
available.
- `extra_arguments` (array of strings) - Additional options to
pass to the Puppet command. This allows for customization of
`execute_command` without having to completely replace
or subsume its contents, making forward-compatible customizations much
easier to maintain.
This string is lazy-evaluated so one can incorporate logic driven by template variables as
well as private elements of ExecuteTemplate (see source: provisioner/puppet-server/provisioner.go).
```
[
{{if ne "{{user environment}}" ""}}--environment={{user environment}}{{end}}
]
```
- `extra_arguments` (array of strings) - Additional options to pass to the
Puppet command. This allows for customization of `execute_command` without
having to completely replace or subsume its contents, making
forward-compatible customizations much easier to maintain.
This string is lazy-evaluated so one can incorporate logic driven by
template variables as well as private elements of ExecuteTemplate (see
source: provisioner/puppet-server/provisioner.go).
```
[
  {{if ne "{{user environment}}" ""}}--environment={{user environment}}{{end}}
]
```
- `facter` (object of key/value strings) - Additional
[facts](https://puppetlabs.com/facter) to make
available to the Puppet run.
[facts](https://puppetlabs.com/facter) to make available to the Puppet run.
- `guest_os_type` (string) - The remote host's OS type ('windows' or 'unix') to
tailor command-line and path separators. (default: unix).
- `guest_os_type` (string) - The remote host's OS type ('windows' or 'unix')
to tailor command-line and path separators. (default: unix).
- `ignore_exit_codes` (boolean) - If true, Packer will ignore failures.
- `prevent_sudo` (boolean) - On Unix platforms Puppet is typically invoked with `sudo`. If true,
it will be omitted. (default: false)
- `prevent_sudo` (boolean) - On Unix platforms Puppet is typically invoked
with `sudo`. If true, it will be omitted. (default: false)
- `puppet_bin_dir` (string) - Path to the Puppet binary. Ideally the program
should be on the system (unix: `$PATH`, windows: `%PATH%`), but some builders (eg. Docker) do
not run profile-setup scripts and therefore PATH might be empty or minimal.
should be on the system (unix: `$PATH`, windows: `%PATH%`), but some
builders (eg. Docker) do not run profile-setup scripts and therefore PATH
might be empty or minimal.
- `puppet_node` (string) - The name of the node. If this isn't set, the fully
qualified domain name will be used.
@ -89,49 +89,46 @@ listed below:
- `puppet_server` (string) - Hostname of the Puppet server. By default
"puppet" will be used.
- `staging_dir` (string) - Directory to where uploaded files
will be placed (unix: "/tmp/packer-puppet-masterless",
windows: "%SYSTEMROOT%/Temp/packer-puppet-masterless").
It doesn't need to pre-exist, but the parent must have permissions sufficient
for the account Packer connects as to create directories and write files.
Use a Shell provisioner to prepare the way if needed.
- `staging_dir` (string) - Directory to where uploaded files will be placed
(unix: "/tmp/packer-puppet-masterless", windows:
"%SYSTEMROOT%/Temp/packer-puppet-masterless"). It doesn't need to
pre-exist, but the parent must have permissions sufficient for the account
Packer connects as to create directories and write files. Use a Shell
provisioner to prepare the way if needed.
- `working_directory` (string) - Directory from which `execute_command` will be run.
If using Hiera files with relative paths, this option can be helpful. (default: `staging_directory`)
- `working_directory` (string) - Directory from which `execute_command` will
be run. If using Hiera files with relative paths, this option can be
helpful. (default: `staging_directory`)
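By way of illustration, a `puppet-server` block combining several of these options might look like this (the host names and fact values are hypothetical):

``` json
{
  "type": "puppet-server",
  "puppet_server": "puppet.example.com",
  "puppet_node": "build01.example.com",
  "facter": {
    "server_role": "webserver"
  }
}
```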
## Execute Command
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Puppet:
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} {{end}}
{{if .Sudo}}sudo -E {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet agent --onetime --no-daemonize --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}}
{{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}}
{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}}
{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
```
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} {{end}}
{{if .Sudo}}sudo -E {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet agent --onetime --no-daemonize --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}}
{{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}}
{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}}
{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
```
The following command is used if the guest OS type is windows:
```
cd {{.WorkingDir}} &&
{{if ne .FacterVars ""}}{{.FacterVars}} && {{end}}
{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}
puppet agent --onetime --no-daemonize --detailed-exitcodes
{{if .Debug}}--debug {{end}}
{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}}
{{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}}
{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}}
{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}
{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}
```
## Default Facts
@ -143,6 +140,7 @@ facts:
This is most useful when Packer is making multiple builds and you want to
distinguish them in your Hiera hierarchy.
- `packer_builder_type` is the type of the builder that was used to create
the machine that Puppet is running on. This is useful if you want to run
only certain parts of your Puppet code on systems built with certain
builders.


@ -12,7 +12,8 @@ sidebar_current: 'docs-provisioners-salt-masterless'
Type: `salt-masterless`
The `salt-masterless` Packer provisioner provisions machines built by Packer
using [Salt](http://saltstack.com/) states, without connecting to a Salt
master.
## Basic Example
@ -28,7 +29,7 @@ The example below is fully functional.
## Configuration Reference
The reference of available configuration options is listed below. The only
required element is "local\_state\_tree".
Required:
@ -38,15 +39,16 @@ Required:
Optional:
- `bootstrap_args` (string) - Arguments to send to the bootstrap script.
Usage is somewhat documented on
[github](https://github.com/saltstack/salt-bootstrap), but the [script
itself](https://github.com/saltstack/salt-bootstrap/blob/develop/bootstrap-salt.sh)
has more detailed usage instructions. By default, no arguments are sent to
the script.
- `disable_sudo` (boolean) - By default, the bootstrap install command is
prefixed with `sudo`. When using a Docker builder, you will likely want to
pass `true` since `sudo` is often not pre-installed.
- `remote_pillar_roots` (string) - The path to your remote [pillar
roots](http://docs.saltstack.com/ref/configuration/master.html#pillar-configuration).
@ -64,31 +66,34 @@ Optional:
Defaults to `state.highstate` if unspecified.
- `minion_config` (string) - The path to your local [minion config
file](http://docs.saltstack.com/ref/configuration/minion.html). This will
be uploaded to `/etc/salt` on the remote. This option overrides the
`remote_state_tree` or `remote_pillar_roots` options.
- `grains_file` (string) - The path to your local [grains
file](https://docs.saltstack.com/en/latest/topics/grains). This will be
uploaded to `/etc/salt/grains` on the remote.
- `skip_bootstrap` (boolean) - By default the salt provisioner runs [salt
bootstrap](https://github.com/saltstack/salt-bootstrap) to install salt.
Set this to true to skip this step.
- `temp_config_dir` (string) - Where your local state tree will be copied
before moving to the `/srv/salt` directory. Default is `/tmp/salt`.
- `no_exit_on_failure` (boolean) - Packer will exit if the `salt-call`
command fails. Set this option to true to ignore Salt failures.
- `log_level` (string) - Set the logging level for the `salt-call` run.
- `salt_call_args` (string) - Additional arguments to pass directly to
`salt-call`. See
[salt-call](https://docs.saltstack.com/ref/cli/salt-call.html)
documentation for more information. By default no additional arguments
(besides the ones Packer generates) are passed to `salt-call`.
- `salt_bin_dir` (string) - Path to the `salt-call` executable. Useful if it
is not on the PATH.
- `guest_os_type` (string) - The target guest OS type, either "unix" or
"windows".
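Putting a few of these options together, a minimal masterless run might look like the following sketch (the state tree path, bootstrap arguments, and log level are illustrative, not defaults):

``` json
{
  "type": "salt-masterless",
  "local_state_tree": "/Users/me/salt",
  "bootstrap_args": "-P stable",
  "log_level": "info"
}
```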


@ -1,9 +1,9 @@
---
description: |
shell-local will run a shell script of your choosing on the machine where
Packer is being run - in other words, shell-local will run the shell script on
your build server, or your desktop, etc., rather than the remote/guest machine
being provisioned by Packer.
layout: docs
page_title: 'Shell (Local) - Provisioners'
sidebar_current: 'docs-provisioners-shell-local'
@ -13,13 +13,13 @@ sidebar_current: 'docs-provisioners-shell-local'
Type: `shell-local`
shell-local will run a shell script of your choosing on the machine where
Packer is being run - in other words, shell-local will run the shell script on
your build server, or your desktop, etc., rather than the remote/guest machine
being provisioned by Packer.
The [remote shell](/docs/provisioners/shell.html) provisioner executes shell
scripts on a remote machine.
## Basic Example
@ -39,16 +39,16 @@ required element is "command".
Exactly *one* of the following is required:
- `command` (string) - This is a single command to execute. It will be
written to a temporary file and run using the `execute_command` call below.
If you are building a windows vm on AWS, Azure or Google Compute and would
like to access the generated password that Packer uses to connect to the
instance via WinRM, you can use the template variable `{{.WinRMPassword}}`
to set this as an environment variable.
- `inline` (array of strings) - This is an array of commands to execute. The
commands are concatenated by newlines and turned into a single file, so
they are all executed within the same context. This allows you to change
directories in one command and use something in the directory in the next
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
@ -66,20 +66,20 @@ Optional parameters:
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the `execute_command`. The format should be `key=value`.
Packer injects some environmental variables by default into the
environment, as well, which are covered in the section below. If you are
building a windows vm on AWS, Azure or Google Compute and would like to
access the generated password that Packer uses to connect to the instance
via WinRM, you can use the template variable `{{.WinRMPassword}}` to set
this as an environment variable. For example:
`"environment_vars": "WINRMPASS={{.WinRMPassword}}"`
- `execute_command` (array of strings) - The command used to execute the
script. By default this is `["/bin/sh", "-c", "{{.Vars}}", "{{.Script}}"]`
on unix and `["cmd", "/c", "{{.Vars}}", "{{.Script}}"]` on windows. This is
treated as a [template engine](/docs/templates/engine.html). There are two
available variables: `Script`, which is the path to the script to run, and
`Vars`, which is the list of `environment_vars`, if configured.
If you choose to set this option, make sure that the first element in the
array is the shell program you want to use (for example, "sh"), and a later
@ -102,43 +102,46 @@ Optional parameters:
to set this as an environment variable.
- `inline_shebang` (string) - The
[shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use
when running commands specified by `inline`. By default, this is
`/bin/sh -e`. If you're not using `inline`, then this configuration has no
effect. **Important:** If you customize this, be sure to include something
like the `-e` flag, otherwise individual steps failing won't fail the
provisioner.
- `only_on` (array of strings) - This is an array of [runtime operating
systems](https://golang.org/doc/install/source#environment) where
`shell-local` will execute. This allows you to execute `shell-local` *only*
on specific operating systems. By default, shell-local will always run if
`only_on` is not set.
- `use_linux_pathing` (bool) - This is only relevant to windows hosts. If you
are running Packer in a Windows environment with the Windows Subsystem for
Linux feature enabled, and would like to invoke a bash script rather than
invoking a Cmd script, you'll need to set this flag to true; it tells
Packer to use the linux subsystem path for your script rather than the
Windows path. (e.g. /mnt/c/path/to/your/file instead of
C:/path/to/your/file). Please see the example below for more guidance on
how to use this feature. If you are not on a Windows host, or you do not
intend to use the shell-local provisioner to run a bash script, please
ignore this option.
## Execute Command
To many new users, the `execute_command` is puzzling. However, it provides an
important function: customization of how the command is executed. The most
common use case for this is dealing with **sudo password prompts**. You may
also need to customize this if you use a non-POSIX shell, such as `tcsh` on
FreeBSD.
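As a sketch, a shell-local provisioner that runs its script under a specific shell rather than the default `/bin/sh` could look like this (the choice of zsh here is illustrative):

``` json
{
  "type": "shell-local",
  "execute_command": ["/bin/zsh", "-c", "{{.Vars}} {{.Script}}"],
  "inline": ["echo \"running under $ZSH_VERSION\""]
}
```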
### The Windows Linux Subsystem
The shell-local provisioner was designed with the idea of allowing you to run
commands in your local operating system's native shell. For Windows, we've
assumed in our defaults that this is Cmd. However, it is possible to run a bash
script as part of the Windows Linux Subsystem from the shell-local provisioner,
by modifying the `execute_command` and the `use_linux_pathing` options in the
provisioner config.
The example below is a fully functional test config.
@ -149,32 +152,30 @@ options instead.
Please note that the WSL is a beta feature, and this tool is not guaranteed to
work as you expect it to.
```
{
"builders": [
{
"type": "null",
"communicator": "none"
}
],
"provisioners": [
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest1"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"scripts": ["C:/Users/me/scripts/example_bash.sh"]
},
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest2"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"script": "C:/Users/me/scripts/example_bash.sh"
}
]
}
```
## Default Environmental Variables
@ -186,9 +187,10 @@ commonly useful environmental variables:
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly from a common provisioning script.
- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create
the machine that the script is running on. This is useful if you want to
run only certain parts of the script on systems built with certain
builders.
- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file
transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this
@ -201,8 +203,8 @@ commonly useful environmental variables:
Whether you use the `inline` option, or pass it a direct `script` or `scripts`,
it is important to understand a few things about how the shell-local
provisioner works to run it safely and easily. This understanding will save you
much time in the process.
### Once Per Builder
@ -218,103 +220,82 @@ are cleaned up.
For a shell script, that means the script **must** exit with a zero code. You
*must* be extra careful to `exit 0` when necessary.
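For example, an inline script whose last step may legitimately fail can force a clean status (a sketch; the scratch-file path is hypothetical):

``` json
{
  "type": "shell-local",
  "inline": [
    "rm -f /tmp/packer-scratch-file || true",
    "exit 0"
  ]
}
```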
## Usage Examples:
Example of running a .cmd file on windows:
```
{
"type": "shell-local",
"environment_vars": ["SHELLLOCALTEST=ShellTest1"],
"scripts": ["./scripts/test_cmd.cmd"]
},
```
Contents of "test\_cmd.cmd":
```
echo %SHELLLOCALTEST%
```
Example of running an inline command on windows: Required customization:
tempfile\_extension
```
{
"type": "shell-local",
"environment_vars": ["SHELLLOCALTEST=ShellTest2"],
"tempfile_extension": ".cmd",
"inline": ["echo %SHELLLOCALTEST%"]
},
```
Example of running a bash command on windows using WSL: Required
customizations: use\_linux\_pathing and execute\_command
```
{
"type": "shell-local",
"environment_vars": ["SHELLLOCALTEST=ShellTest3"],
"execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"],
"use_linux_pathing": true,
"script": "./scripts/example_bash.sh"
}
```
Contents of "example\_bash.sh":
```
#!/bin/bash
echo $SHELLLOCALTEST
```
Example of running a powershell script on windows: Required customizations:
env\_var\_format and execute\_command
```
{
"type": "shell-local",
"environment_vars": ["SHELLLOCALTEST=ShellTest4"],
"execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"],
"env_var_format": "$env:%s=\"%s\"; "
}
```
Example of running a powershell script on windows as "inline":
Required customizations: env_var_format, tempfile_extension, and execute_command
```
{
"type": "shell-local",
"tempfile_extension": ".ps1",
"environment_vars": ["SHELLLOCALTEST=ShellTest5"],
"execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"],
"env_var_format": "$env:%s=\"%s\"; ",
"inline": ["write-output $env:SHELLLOCALTEST"]
}
```
Example of running a bash script on linux:
```
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest1"],
"scripts": ["./scripts/dummy_bash.sh"]
}
```
Example of running a bash "inline" on linux:
```
{
"type": "shell-local",
"environment_vars": ["PROVISIONERTEST=ProvisionerTest2"],
"inline": ["echo hello",
"echo $PROVISIONERTEST"]
}
```


@ -34,19 +34,20 @@ The example below is fully functional.
## Configuration Reference
The reference of available configuration options is listed below. The only
required element is either "inline" or "script". Every other option is
optional.
Exactly *one* of the following is required:
- `inline` (array of strings) - This is an array of commands to execute. The
commands are concatenated by newlines and turned into a single file, so
they are all executed within the same context. This allows you to change
directories in one command and use something in the directory in the next
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
- `script` (string) - The path to a script to upload and execute in the
machine. This path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed.
- `scripts` (array of strings) - An array of scripts to execute. The scripts
@ -56,54 +57,56 @@ Exactly *one* of the following is required:
Optional parameters:
- `binary` (boolean) - If true, specifies that the script(s) are binary
files, and Packer should therefore not convert Windows line endings to Unix
line endings (if there are any). By default this is false.
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the execute\_command. The format should be `key=value`.
Packer injects some environmental variables by default into the
environment, as well, which are covered in the section below.
- `use_env_var_file` (boolean) - If true, Packer will write your environment
variables to a tempfile and source them from that file, rather than
declaring them inline in our execute\_command. The default
`execute_command` will be
`chmod +x {{.Path}}; . {{.EnvVarFile}} && {{.Path}}`. This option is
unnecessary for most cases, but if you have extra quoting in your custom
`execute_command`, then this may be necessary for proper script
execution. Default: false.
- `execute_command` (string) - The command to use to execute the script. By
default this is `chmod +x {{ .Path }}; {{ .Vars }} {{ .Path }}`, unless the
user has set `"use_env_var_file": true` -- in that case, the default
`execute_command` is `chmod +x {{.Path}}; . {{.EnvVarFile}} && {{.Path}}`.
The value of this is treated as a [configuration
template](/docs/templates/engine.html). There are three available
variables:
- `Path` is the path to the script to run
- `Vars` is the list of `environment_vars`, if configured.
- `EnvVarFile` is the path to the file containing env vars, if
`use_env_var_file` is true.
- `expect_disconnect` (boolean) - Defaults to `false`. Whether to error if
the server disconnects us. A disconnect might happen if you restart the ssh
server or reboot the host.
- `inline_shebang` (string) - The
[shebang](https://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use
when running commands specified by `inline`. By default, this is
`/bin/sh -e`. If you're not using `inline`, then this configuration has no
effect. **Important:** If you customize this, be sure to include something
like the `-e` flag, otherwise individual steps failing won't fail the
provisioner.
- `remote_folder` (string) - The folder where the uploaded script will reside
on the machine. This defaults to '/tmp'.
- `remote_file` (string) - The filename the uploaded script will have on the
machine. This defaults to 'script\_nnn.sh'.
- `remote_path` (string) - The full path the uploaded script will have on
the machine. By default this is remote\_folder/remote\_file; if set, this
option will override both remote\_folder and remote\_file.
- `skip_clean` (boolean) - If true, specifies that the helper scripts
uploaded to the system will not be removed by Packer. This defaults to
@ -111,16 +114,16 @@ Optional parameters:
- `start_retry_timeout` (string) - The amount of time to attempt to *start*
the remote process. By default this is `5m` or 5 minutes. This setting
exists in order to deal with times when SSH may restart, such as a system
reboot. Set this to a higher value if reboots take a longer amount of time.
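For instance, a script that triggers a reboot can pair a longer retry window with `expect_disconnect` (the script name and timeout value here are illustrative):

``` json
{
  "type": "shell",
  "script": "scripts/kernel-upgrade.sh",
  "expect_disconnect": true,
  "start_retry_timeout": "15m"
}
```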
## Execute Command Example
To many new users, the `execute_command` is puzzling. However, it provides an
important function: customization of how the command is executed. The most
common use case for this is dealing with **sudo password prompts**. You may
also need to customize this if you use a non-POSIX shell, such as `tcsh` on
FreeBSD.
### Sudo Example
@ -135,7 +138,8 @@ Some operating systems default to a non-root user. For example if you login as
The `-S` flag tells `sudo` to read the password from stdin, which in this case
is being piped in with the value of `packer`.
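The pattern described here follows this shape (a sketch of the documented sudo form, not a drop-in for every image):

``` json
{
  "type": "shell",
  "inline": ["whoami"],
  "execute_command": "echo 'packer' | sudo -S sh -c '{{ .Vars }} {{ .Path }}'"
}
```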
The above example won't work if your environment vars contain spaces or single
quotes; in these cases try removing the single quotes:
``` text
"echo 'packer' | sudo -S env {{ .Vars }} {{ .Path }}"
```
@ -146,8 +150,8 @@ privileges without worrying about password prompts.
### FreeBSD Example
FreeBSD's default shell is `tcsh`, which deviates from POSIX semantics. In
order for packer to pass environment variables you will need to change the
`execute_command` to:
``` text
@ -162,14 +166,15 @@ In addition to being able to specify custom environmental variables using the
`environment_vars` configuration, the provisioner automatically defines certain
commonly useful environmental variables:
- `PACKER_BUILD_NAME` is set to the [name of the
build](/docs/templates/builders.html#named-builds) that Packer is running.
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly from a common provisioning script.
- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create
the machine that the script is running on. This is useful if you want to
run only certain parts of the script on systems built with certain
builders.
- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file
transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this
@ -190,7 +195,8 @@ scripts. The amount of time the provisioner will wait is configured using
Sometimes, when executing a command like `reboot`, the shell script will return
and Packer will start executing the next one before SSH actually quits and the
machine restarts. For this, use "pause\_before" to make Packer wait before
executing the next script:
``` json
{
@ -202,8 +208,8 @@ machine restarts. For this, put use "pause\_before" to make Packer wait before e
Some OS configurations don't properly kill all network connections on reboot,
causing the provisioner to hang despite a reboot occurring. In this case, make
sure you shut down the network interfaces on reboot or in your shell script.
For example, on Gentoo:
``` text
/etc/init.d/net.eth0 stop
```
@ -214,9 +220,9 @@ example, on Gentoo:
Some provisioning requires connecting to remote SSH servers from within the
packer instance. The below example is for pulling code from a private git
repository utilizing openssh on the client. Make sure you are running
`ssh-agent` and add your git repo ssh keys into it using
`ssh-add /path/to/key`. When the packer instance needs access to the ssh keys
the agent will forward the request back to your `ssh-agent`.
Note: when provisioning via git you should add the git server keys into the
`~/.ssh/known_hosts` file otherwise the git command could hang awaiting input.
@ -264,11 +270,11 @@ would be:
*My builds don't always work the same*
- Some distributions start the SSH daemon before other core services which
can create race conditions. Your first provisioner can tell the machine to
wait until it completely boots.
``` json
{
"type": "shell",
"inline": [ "sleep 10" ]
@ -277,11 +283,10 @@ would be:
## Quoting Environment Variables
Packer manages quoting for you, so you shouldn't have to worry about it. Below
is an example of packer template inputs and what you should expect to get out:
``` json
"provisioners": [
{
"type": "shell",
@ -304,12 +309,10 @@ out:
Output:
```
docker: FOO is foo
docker: BAR is bar's
docker: BAZ is baz=baz
docker: QUX is =qux
docker: FOOBAR is foo bar
docker: FOOBARBAZ is 'foo bar baz'
docker: QUX2 is "qux"
```

View File

@ -19,7 +19,8 @@ provisioner helps to ease that process.
Packer expects the machine to be ready to continue provisioning after it
reboots. Packer detects that the reboot has completed by making an RPC call
through the Windows Remote Management (WinRM) service, not by ACPI functions,
so Windows must be completely booted in order to continue.
## Basic Example

View File

@ -28,19 +28,20 @@ The example below is fully functional.
## Configuration Reference
The reference of available configuration options is listed below. The only
required element is either "inline" or "script". Every other option is
optional.
Exactly *one* of the following is required:
- `inline` (array of strings) - This is an array of commands to execute. The
commands are concatenated by newlines and turned into a single file, so
they are all executed within the same context. This allows you to change
directories in one command and use something in the directory in the next
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
- `script` (string) - The path to a script to upload and execute in the
machine. This path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed.
- `scripts` (array of strings) - An array of scripts to execute. The scripts
@ -50,30 +51,29 @@ Exactly *one* of the following is required:
Optional parameters:
- `binary` (boolean) - If true, specifies that the script(s) are binary
files, and Packer should therefore not convert Windows line endings to Unix
line endings (if there are any). By default this is false.
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the execute\_command. The format should be `key=value`.
Packer injects some environmental variables by default into the
environment, as well, which are covered in the section below.
- `execute_command` (string) - The command to use to execute the script. By
default this is `{{ .Vars }}"{{ .Path }}"`. The value of this is treated as
[template engine](/docs/templates/engine.html). There are two available
variables: `Path`, which is the path to the script to run, and `Vars`,
which is the list of `environment_vars`, if configured.
- `remote_path` (string) - The path where the script will be uploaded to in
the machine. This defaults to "c:/Windows/Temp/script.bat". This value must
be a writable location and any parent directories must already exist.
- `start_retry_timeout` (string) - The amount of time to attempt to *start*
the remote process. By default this is "5m" or 5 minutes. This setting
exists in order to deal with times when SSH may restart, such as a system
reboot. Set this to a higher value if reboots take a longer amount of time.
## Default Environmental Variables
@ -81,14 +81,15 @@ In addition to being able to specify custom environmental variables using the
`environment_vars` configuration, the provisioner automatically defines certain
commonly useful environmental variables:
- `PACKER_BUILD_NAME` is set to the [name of the
build](/docs/templates/builders.html#named-builds) that Packer is running.
This is most useful when Packer is making multiple builds and you want to
distinguish them slightly from a common provisioning script.
- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create
the machine that the script is running on. This is useful if you want to
run only certain parts of the script on systems built with certain
builders.
- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file
transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this

View File

@ -1,7 +1,7 @@
---
description: |
Within the template, the builders section contains an array of all the builders
that Packer should use to generate machine images for the template.
layout: docs
page_title: 'Builders - Templates'
sidebar_current: 'docs-templates-builders'
@ -13,12 +13,12 @@ Within the template, the builders section contains an array of all the builders
that Packer should use to generate machine images for the template.
Builders are responsible for creating machines and generating images from them
for various platforms. For example, there are separate builders for EC2,
VMware, VirtualBox, etc. Packer comes with many builders by default, and can
also be extended to add new builders.
This documentation page will cover how to configure a builder in a template.
The specific configuration options available for each builder, however, must be
referenced from the documentation for that specific builder.
Within a template, a section of builder definitions looks like this:
@ -38,9 +38,9 @@ A single builder definition maps to exactly one
JSON object that requires at least a `type` key. The `type` is the name of the
builder that will be used to create a machine image for the build.
In addition to the `type`, other keys configure the builder itself. For
example, the AWS builder requires an `access_key`, `secret_key`, and some other
settings. These are placed directly within the builder definition.
An example builder definition is shown below, in this case configuring the AWS
builder:

View File

@ -9,11 +9,12 @@ sidebar_current: 'docs-templates-communicators'
# Template Communicators
Communicators are the mechanism Packer uses to upload files, execute scripts,
etc. with the machine being created.
Communicators are configured within the
[builder](/docs/templates/builders.html) section. Packer currently supports
three kinds of communicators:
- `none` - No communicator will be used. If this is set, most provisioners
also can't be used.
@ -23,18 +24,18 @@ section. Packer currently supports three kinds of communicators:
- `winrm` - A WinRM connection will be established.
In addition to the above, some builders have custom communicators they can use.
For example, the Docker builder has a "docker" communicator that uses
`docker exec` and `docker cp` to execute scripts and copy files.
## Using a Communicator
By default, the SSH communicator is usually used. Additional configuration may
not even be necessary, since some builders such as Amazon automatically
configure everything.
However, to specify a communicator, you set the `communicator` key within a
build. Multiple builds can have different communicators. Example:
``` json
{
@ -54,43 +55,45 @@ configuration parameters for that communicator. These are documented below.
The SSH communicator connects to the host via SSH. If you have an SSH agent
configured on the host running Packer, and SSH agent authentication is enabled
in the communicator config, Packer will automatically forward the SSH agent to
the remote host.
The SSH communicator has the following options:
- `ssh_agent_auth` (boolean) - If `true`, the local SSH agent will be used to
authenticate connections to the remote host. Defaults to `false`.
- `ssh_bastion_agent_auth` (boolean) - If `true`, the local SSH agent will be
used to authenticate with the bastion host. Defaults to `false`.
- `ssh_bastion_host` (string) - A bastion host to use for the actual SSH
connection.
- `ssh_bastion_password` (string) - The password to use to authenticate with
the bastion host.
- `ssh_bastion_port` (number) - The port of the bastion host. Defaults to
`22`.
- `ssh_bastion_private_key_file` (string) - A private key file to use to
authenticate with the bastion host.
- `ssh_bastion_username` (string) - The username to connect to the bastion
host.
- `ssh_clear_authorized_keys` (boolean) - If true, Packer will attempt to
remove its temporary key from `~/.ssh/authorized_keys` and
`/root/.ssh/authorized_keys`. This is a mostly cosmetic option, since
Packer will delete the temporary private key from the host system
regardless of whether this is set to true (unless the user has set the
`-debug` flag). Defaults to "false"; currently only works on guests with
`sed` installed.
- `ssh_disable_agent_forwarding` (boolean) - If true, SSH agent forwarding
will be disabled. Defaults to `false`.
- `ssh_file_transfer_method` (`scp` or `sftp`) - How to transfer files,
Secure copy (default) or SSH File Transfer Protocol.
- `ssh_handshake_attempts` (number) - The number of handshakes to attempt
with SSH once it can connect. This defaults to `10`.
@ -98,17 +101,17 @@ The SSH communicator has the following options:
- `ssh_host` (string) - The address to SSH to. This usually is automatically
configured by the builder.
- `ssh_keep_alive_interval` (string) - How often to send "keep alive"
messages to the server. Set to a negative value (`-1s`) to disable. Example
value: `10s`. Defaults to `5s`.
- `ssh_password` (string) - A plaintext password to use to authenticate with
SSH.
- `ssh_port` (number) - The port to connect to SSH. This defaults to `22`.
- `ssh_private_key_file` (string) - Path to a PEM encoded private key file to
use to authenticate with SSH.
- `ssh_proxy_host` (string) - A SOCKS proxy host to use for SSH connection
@ -123,42 +126,42 @@ The SSH communicator has the following options:
- `ssh_pty` (boolean) - If `true`, a PTY will be requested for the SSH
connection. This defaults to `false`.
- `ssh_read_write_timeout` (string) - The amount of time to wait for a remote
command to end. This might be useful if, for example, packer hangs on a
connection after a reboot. Example: `5m`. Disabled by default.
- `ssh_timeout` (string) - The time to wait for SSH to become available.
Packer uses this to determine when the machine has booted so this is
usually quite long. Example value: `10m`.
- `ssh_username` (string) - The username to connect to SSH with. Required if
using SSH.
### SSH Communicator Details
Packer will only use one authentication method: either `publickey`, or, if
`ssh_password` is used, Packer will offer `password` and `keyboard-interactive`,
both sending the password. In other words, Packer will not work with *sshd*
configured with more than one configured authentication method using
`AuthenticationMethods`.
Packer supports the following ciphers:
- aes128-ctr
- aes192-ctr
- aes256-ctr
- arcfour128
- arcfour256
- arcfour
- aes128-gcm@openssh.com
- chacha20-poly1305@openssh.com
And the following MACs:
- hmac-sha1
- hmac-sha1-96
- hmac-sha2-256
- hmac-sha2-256-etm@openssh.com
## WinRM Communicator
@ -175,15 +178,15 @@ The WinRM communicator has the following options.
`5985` for plain unencrypted connection and `5986` for SSL when
`winrm_use_ssl` is set to true.
- `winrm_timeout` (string) - The amount of time to wait for WinRM to become
available. This defaults to `30m` since setting up a Windows machine
generally takes a long time.
- `winrm_use_ntlm` (boolean) - If `true`, NTLMv2 authentication (with session
security) will be used for WinRM, rather than default (basic
authentication), removing the requirement for basic authentication to be
enabled within the target guest. Further reading for remote connection
authentication can be found
[here](https://msdn.microsoft.com/en-us/library/aa384295(v=vs.85).aspx).
[here](https://msdn.microsoft.com/en-us/library/aa384295(v=vs.85).aspx).
- `winrm_use_ssl` (boolean) - If `true`, use HTTPS for WinRM.
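For illustration, a WinRM communicator block using the options above might
look like this. The `winrm_username` and `winrm_password` keys are assumed
here (they are not described in the list above), and the values are
placeholders:

``` json
{
  "communicator": "winrm",
  "winrm_username": "Administrator",
  "winrm_password": "{{user `winrm_password`}}",
  "winrm_use_ssl": true,
  "winrm_port": 5986,
  "winrm_timeout": "45m"
}
```

Note that `winrm_port` is set to `5986` to match `winrm_use_ssl: true`, as
described in the port option above.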

View File

@ -10,36 +10,42 @@ sidebar_current: 'docs-templates-engine'
# Template Engine
All strings within templates are processed by a common Packer templating
engine, where variables and functions can be used to modify the value of a
configuration parameter at runtime.
The syntax of templates uses the following conventions:
- Anything template related happens within double-braces: `{{ }}`.
- Functions are specified directly within the braces, such as
`{{timestamp}}`.
- Template variables are prefixed with a period and capitalized, such as
`{{.Variable}}`.
## Functions
Functions perform operations on and within strings, for example the
`{{timestamp}}` function can be used in any string to generate the current
timestamp. This is useful for configurations that require unique keys, such as
AMI names. By setting the AMI name to something like
`My Packer AMI {{timestamp}}`, the AMI name will be unique down to the second.
If you need greater than one second granularity, you should use `{{uuid}}`, for
example when you have multiple builders in the same template.
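For instance, a hypothetical AMI name combining both functions for uniqueness
might look like:

``` json
{
  "ami_name": "my-packer-ami-{{timestamp}}-{{uuid}}"
}
```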
Here is a full list of the available functions for reference.
- `build_name` - The name of the build being run.
- `build_type` - The type of the builder being used currently.
- `env` - Returns environment variables. See example in [using home
variable](/docs/templates/user-variables.html#using-home-variable)
- `isotime [FORMAT]` - UTC time, which can be
[formatted](https://golang.org/pkg/time/#example_Time_Format). See more
examples below in [the `isotime` format
reference](/docs/templates/engine.html#isotime-function-format-reference).
- `lower` - Lowercases the string.
- `pwd` - The working directory while executing Packer.
- `split` - Split an input string using separator and return the requested
substring.
- `template_dir` - The directory to the template for the build.
- `timestamp` - The current Unix timestamp in UTC.
- `uuid` - Returns a random UUID.
@ -50,18 +56,16 @@ Here is a full list of the available functions for reference.
#### Specific to Amazon builders:
- `clean_ami_name` - AMI names can only contain certain characters. This
function will replace illegal characters with a "-" character. Example
usage since ":" is not a legal AMI name is: `{{isotime | clean_ami_name}}`.
#### Specific to Google Compute builders:
- `clean_image_name` - GCE image names can only contain certain characters
and the maximum length is 63. This function will convert upper cases to
lower cases and replace illegal characters with a "-" character. Example:
`"mybuild-{{isotime | clean_image_name}}"` will become
`mybuild-2017-10-18t02-06-30z`.
Note: Valid GCE image names must match the regex
@ -70,16 +74,15 @@ Here is a full list of the available functions for reference.
This engine does not guarantee that the final image name will match the
regex; it will not truncate your name if it exceeds 63 characters, and it
will not validate that the beginning and end of the engine's output are
valid. For example, `"image_name": "{{isotime | clean_image_name}}"` will
cause your build to fail because the image name will start with a number,
which is why in the above example we prepend the isotime with "mybuild".
#### Specific to Azure builders:
- `clean_image_name` - Azure managed image names can only contain certain
characters and the maximum length is 80. This function will replace illegal
characters with a "-" character. Example:
`"mybuild-{{isotime | clean_image_name}}"` will become
`mybuild-2017-10-18t02-06-30z`.
@ -87,17 +90,24 @@ Here is a full list of the available functions for reference.
Note: Valid Azure image names must match the regex
`^[^_\\W][\\w-._)]{0,79}$`
This engine does not guarantee that the final image name will match the
regex; it will not truncate your name if it exceeds 80 characters, and it
will not validate that the beginning and end of the engine's output are
valid. It will truncate invalid characters from the end of the name when
converting illegal characters. For example,
`"managed_image_name: "My-Name::"` will be converted to
`"managed_image_name: "My-Name"`
## Template variables
Template variables are special variables automatically set by Packer at build
time. Some builders, provisioners and other components have template variables
that are available only for that component. Template variables are recognizable
because they're prefixed by a period, such as `{{ .Name }}`. For example, when
using the [`shell`](/docs/builders/vmware-iso.html) builder template variables
are available to customize the
[`execute_command`](/docs/provisioners/shell.html#execute_command) parameter
used to determine how Packer will run the shell command.
``` liquid
{
@ -113,9 +123,13 @@ Template variables are special variables automatically set by Packer at build ti
}
```
The `{{ .Vars }}` and `{{ .Path }}` template variables will be replaced with
the list of the environment variables and the path to the script to be executed
respectively.
-> **Note:** In addition to template variables, you can specify your own
user variables. See the [user variable](/docs/templates/user-variables.html)
documentation for more information on user variables.
# isotime Function Format Reference
@ -214,7 +228,8 @@ Formatting for the function `isotime` uses the magic reference date **Mon Jan 2
</table>
*The values in parentheses are the abbreviated, or 24-hour clock values*
Note that "-0700" is always formatted into "+0000" because `isotime` is always
UTC time.
Here are some example formatted time, using the above format options:
@ -227,7 +242,8 @@ isotime = June 7, 7:22:43pm 2014
{{isotime "Hour15Year200603"}} = Hour19Year201407
```
Please note that double quote characters need escaping inside of templates (in
this case, on the `ami_name` value):
``` json
{
@ -246,11 +262,14 @@ Please note that double quote characters need escaping inside of templates (in t
}
```
-> **Note:** See the [Amazon builder](/docs/builders/amazon.html)
documentation for more information on how to correctly configure the Amazon
builder in this example.
# split Function Format Reference
The function `split` takes an input string, a separator string, and a numeric
component value and returns the requested substring.
Here are some examples using the above options:
@ -261,7 +280,8 @@ build_name = foo-bar-provider
{{split "fixed-string" "-" 1}} = string
```
Please note that double quote characters need escaping inside of templates (in
this case, on the `fixed-string` value):
``` json
{

View File

@ -1,10 +1,10 @@
---
description: |
Templates are JSON files that configure the various components of Packer in
order to create one or more machine images. Templates are portable, static, and
readable and writable by both humans and computers. This has the added benefit
of being able to not only create and modify templates by hand, but also write
scripts to dynamically create or modify templates.
layout: docs
page_title: Templates
sidebar_current: 'docs-templates'
@ -30,8 +30,8 @@ Along with each key, it is noted whether it is required or not.
- `builders` (*required*) is an array of one or more objects that defines the
builders that will be used to create machine images for this template, and
configures each of those builders. For more information on how to define
and configure a builder, read the sub-section on [configuring builders in
templates](/docs/templates/builders.html).
- `description` (optional) is a string providing a description of what the
@ -44,24 +44,24 @@ Along with each key, it is noted whether it is required or not.
can't be specified because Packer retains backwards compatibility with
`packer fix`.
- `post-processors` (optional) is an array of one or more objects that
defines the various post-processing steps to take with the built images. If
not specified, then no post-processing will be done. For more information
on what post-processors do and how they're defined, read the sub-section on
[configuring post-processors in
templates](/docs/templates/post-processors.html).
- `provisioners` (optional) is an array of one or more objects that defines
the provisioners that will be used to install and configure software for
the machines created by each of the builders. If it is not specified, then
no provisioners will be run. For more information on how to define and
configure a provisioner, read the sub-section on [configuring provisioners
in templates](/docs/templates/provisioners.html).
- `variables` (optional) is an object of one or more key/value strings that
defines user variables contained in the template. If it is not specified,
then no variables are defined. For more information on how to define and
use user variables, read the sub-section on [user variables in
templates](/docs/templates/user-variables.html).
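Putting the keys above together, a minimal template skeleton might look like
the following. This is an illustrative sketch only: the builder, provisioner,
and post-processor contents are placeholders, and a real `amazon-ebs` builder
requires additional settings.

``` json
{
  "variables": {
    "aws_region": "us-west-2"
  },
  "builders": [
    { "type": "amazon-ebs", "region": "{{user `aws_region`}}" }
  ],
  "provisioners": [
    { "type": "shell", "inline": ["echo provisioning"] }
  ],
  "post-processors": [
    { "type": "compress" }
  ]
}
```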
## Comments
@ -84,9 +84,14 @@ builders, provisioners, etc. will still result in validation errors.
## Example Template
Below is an example of a basic template that could be invoked with
`packer build`. It would create an instance in AWS, and once running copy a
script to it and run that script using SSH.
-> **Note:** This example requires an account with Amazon Web Services. There are a number of parameters which need to be provided for a functional build to take place. See the [Amazon builder](/docs/builders/amazon.html) documentation for more information.
-> **Note:** This example requires an account with Amazon Web Services.
There are a number of parameters which need to be provided for a functional
build to take place. See the [Amazon builder](/docs/builders/amazon.html)
documentation for more information.
``` json
{

View File

@ -10,9 +10,9 @@ sidebar_current: 'docs-templates-post-processors'
# Template Post-Processors
The post-processor section within a template configures any post-processing that
will be done to images built by the builders. Examples of post-processing would
be compressing files, uploading artifacts, etc.
The post-processor section within a template configures any post-processing
that will be done to images built by the builders. Examples of post-processing
would be compressing files, uploading artifacts, etc.
Post-processors are *optional*. If no post-processors are defined within a
template, then no post-processing will be done to the image. The resulting
@ -34,18 +34,19 @@ Within a template, a section of post-processor definitions looks like this:
```
For each post-processor definition, Packer will take the result of each of the
defined builders and send it through the post-processors. This means that if you
have one post-processor defined and two builders defined in a template, the
defined builders and send it through the post-processors. This means that if
you have one post-processor defined and two builders defined in a template, the
post-processor will run twice (once for each builder), by default. There are
ways, which will be covered later, to control what builders post-processors
apply to, if you wish.
## Post-Processor Definition
Within the `post-processors` array in a template, there are three ways to define
a post-processor. There are *simple* definitions, *detailed* definitions, and
*sequence* definitions. Another way to think about this is that the "simple" and
"detailed" definitions are shortcuts for the "sequence" definition.
Within the `post-processors` array in a template, there are three ways to
define a post-processor. There are *simple* definitions, *detailed*
definitions, and *sequence* definitions. Another way to think about this is
that the "simple" and "detailed" definitions are shortcuts for the "sequence"
definition.
A **simple definition** is just a string; the name of the post-processor. An
example is shown below. Simple definitions are used when no additional
@ -61,7 +62,8 @@ A **detailed definition** is a JSON object. It is very similar to a builder or
provisioner definition. It contains a `type` field to denote the type of the
post-processor, but may also contain additional configuration for the
post-processor. A detailed definition is used when additional configuration is
needed beyond simply the type for the post-processor. An example is shown below.
needed beyond simply the type for the post-processor. An example is shown
below.
``` json
{
@ -82,7 +84,8 @@ sequence definition. Sequence definitions are used to chain together multiple
post-processors. An example is shown below, where the artifact of a build is
compressed then uploaded, but the compressed result is not kept.
It is very important that any post processors that need to be run in order, be sequenced!
It is very important that any post-processors that need to run in order be
sequenced!
``` json
{
@ -100,13 +103,13 @@ simply shortcuts for a **sequence** definition of only one element.
## Input Artifacts
When using post-processors, the input artifact (coming from a builder or another
post-processor) is discarded by default after the post-processor runs. This is
because generally, you don't want the intermediary artifacts on the way to the
final artifact created.
When using post-processors, the input artifact (coming from a builder or
another post-processor) is discarded by default after the post-processor runs.
This is because generally, you don't want the intermediary artifacts on the way
to the final artifact created.
In some cases, however, you may want to keep the intermediary artifacts. You can
tell Packer to keep these artifacts by setting the `keep_input_artifact`
In some cases, however, you may want to keep the intermediary artifacts. You
can tell Packer to keep these artifacts by setting the `keep_input_artifact`
configuration to `true`. An example is shown below:
``` json
@ -152,5 +155,5 @@ configurations. If you have a sequence of post-processors to run, `only` and
The values within `only` or `except` are *build names*, not builder types. If
you recall, build names by default are just their builder type, but if you
specify a custom `name` parameter, then you should use that as the value instead
of the type.
specify a custom `name` parameter, then you should use that as the value
instead of the type.
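For instance, assuming a builder was given a custom `name` of `my-ami` (a
hypothetical name for illustration), a post-processor could be restricted to
that build like this:

``` json
{
  "type": "compress",
  "only": ["my-ami"]
}
```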

View File

@ -19,9 +19,10 @@ then no software other than the defaults will be installed within the resulting
machine images. This is not typical, however, since much of the value of Packer
is to produce multiple identical images of pre-configured software.
This documentation page will cover how to configure a provisioner in a template.
The specific configuration options available for each provisioner, however, must
be referenced from the documentation for that specific provisioner.
This documentation page will cover how to configure a provisioner in a
template. The specific configuration options available for each provisioner,
however, must be referenced from the documentation for that specific
provisioner.
Within a template, a section of provisioner definitions looks like this:
@ -41,11 +42,12 @@ within the template.
A provisioner definition is a JSON object that must contain at least the `type`
key. This key specifies the name of the provisioner to use. Additional keys
within the object are used to configure the provisioner, with the exception of a
handful of special keys, covered later.
within the object are used to configure the provisioner, with the exception of
a handful of special keys, covered later.
As an example, the "shell" provisioner requires a key such as `script` which
specifies a path to a shell script to execute within the machines being created.
specifies a path to a shell script to execute within the machines being
created.
An example provisioner definition is shown below, configuring the shell
provisioner to run a local script within the machines:
@ -59,9 +61,9 @@ provisioner to run a local script within the machines:
## Run on Specific Builds
You can use the `only` or `except` configurations to run a provisioner only with
specific builds. These two configurations do what you expect: `only` will only
run the provisioner on the specified builds and `except` will run the
You can use the `only` or `except` configurations to run a provisioner only
with specific builds. These two configurations do what you expect: `only` will
only run the provisioner on the specified builds and `except` will run the
provisioner on anything other than the specified builds.
An example of `only` being used is shown below, but the usage of `except` is
@ -77,23 +79,23 @@ effectively the same:
The values within `only` or `except` are *build names*, not builder types. If
you recall, build names by default are just their builder type, but if you
specify a custom `name` parameter, then you should use that as the value instead
of the type.
specify a custom `name` parameter, then you should use that as the value
instead of the type.
## Build-Specific Overrides
While the goal of Packer is to produce identical machine images, it sometimes
requires periods of time where the machines are different before they eventually
converge to be identical. In these cases, different configurations for
provisioners may be necessary depending on the build. This can be done using
build-specific overrides.
requires periods of time where the machines are different before they
eventually converge to be identical. In these cases, different configurations
for provisioners may be necessary depending on the build. This can be done
using build-specific overrides.
An example of where this might be necessary is when building both an EC2 AMI and
a VMware machine. The source EC2 AMI may setup a user with administrative
privileges by default, whereas the VMware machine doesn't have these privileges.
In this case, the shell script may need to be executed differently. Of course,
the goal is that hopefully the shell script converges these two images to be
identical. However, they may initially need to be run differently.
An example of where this might be necessary is when building both an EC2 AMI
and a VMware machine. The source EC2 AMI may set up a user with administrative
privileges by default, whereas the VMware machine doesn't have these
privileges. In this case, the shell script may need to be executed differently.
Of course, the goal is that hopefully the shell script converges these two
images to be identical. However, they may initially need to be run differently.
This example is shown below:
@ -111,9 +113,10 @@ This example is shown below:
As you can see, the `override` key is used. The value of this key is another
JSON object where the key is the name of a [builder
definition](/docs/templates/builders.html). The value of this is in turn another
JSON object. This JSON object simply contains the provisioner configuration as
normal. This configuration is merged into the default provisioner configuration.
definition](/docs/templates/builders.html). The value of this is in turn
another JSON object. This JSON object simply contains the provisioner
configuration as normal. This configuration is merged into the default
provisioner configuration.
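A sketch of what such an `override` might look like, assuming a builder named
`vmware-iso` and a hypothetical script and command:

``` json
{
  "type": "shell",
  "script": "script.sh",
  "override": {
    "vmware-iso": {
      "execute_command": "echo 'vagrant' | sudo -S sh {{.Path}}"
    }
  }
}
```

Here the `execute_command` applies only to the `vmware-iso` build; every other
build runs the provisioner with its default configuration.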
## Pausing Before Running

View File

@ -12,27 +12,26 @@ sidebar_current: 'docs-templates-user-variables'
# Template User Variables
User variables allow your templates to be further configured with variables from
the command-line, environment variables, Vault, or files. This lets you parameterize
your templates so that you can keep secret tokens, environment-specific data,
and other types of information out of your templates. This maximizes the
portability of the template.
User variables allow your templates to be further configured with variables
from the command-line, environment variables, Vault, or files. This lets you
parameterize your templates so that you can keep secret tokens,
environment-specific data, and other types of information out of your
templates. This maximizes the portability of the template.
Using user variables expects you to know how [configuration
templates](/docs/templates/engine.html) work. If you don't know
how configuration templates work yet, please read that page first.
templates](/docs/templates/engine.html) work. If you don't know how
configuration templates work yet, please read that page first.
## Usage
User variables must first be defined in a `variables` section within
your template. Even if you want a user variable to default to an empty
string, it must be defined. This explicitness helps reduce the time it
takes for newcomers to understand what can be modified using variables
in your template.
User variables must first be defined in a `variables` section within your
template. Even if you want a user variable to default to an empty string, it
must be defined. This explicitness helps reduce the time it takes for newcomers
to understand what can be modified using variables in your template.
The `variables` section is a key/value mapping of the user variable name
to a default value. A default value can be the empty string. An example
is shown below:
The `variables` section is a key/value mapping of the user variable name to a
default value. A default value can be the empty string. An example is shown
below:
``` json
{
@ -50,14 +49,14 @@ is shown below:
}
```
In the above example, the template defines two user variables:
`aws_access_key` and `aws_secret_key`. They default to empty values.
Later, the variables are used within the builder we defined in order to
configure the actual keys for the Amazon builder.
In the above example, the template defines two user variables: `aws_access_key`
and `aws_secret_key`. They default to empty values. Later, the variables are
used within the builder we defined in order to configure the actual keys for
the Amazon builder.
If the default value is `null`, then the user variable will be
*required*. This means that the user must specify a value for this
variable or template validation will fail.
If the default value is `null`, then the user variable will be *required*. This
means that the user must specify a value for this variable or template
validation will fail.
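For example, marking a variable as required is just a matter of giving it a
`null` default (the variable name here is hypothetical):

``` json
{
  "variables": {
    "aws_access_key": null
  }
}
```

With this definition, `packer validate` will fail unless a value for
`aws_access_key` is supplied on the command line or from a variable file.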
User variables are used by calling the `{{user}}` function in the form of
<code>{{user \`variable\`}}</code>. This function can be used in *any value*
@ -72,7 +71,7 @@ The `env` function is available *only* within the default value of a user
variable, allowing you to default a user variable to an environment variable.
An example is shown below:
```json
``` json
{
"variables": {
"my_secret": "{{env `MY_SECRET`}}",
@ -86,9 +85,9 @@ variable (or an empty string if it does not exist).
-> **Why can't I use environment variables elsewhere?** User variables are
the single source of configurable input to a template. We felt that having
environment variables used *anywhere* in a template would confuse the user
about the possible inputs to a template. By allowing environment variables
only within default values for user variables, user variables remain as the
single source of input to a template that a user can easily discover using
about the possible inputs to a template. By allowing environment variables only
within default values for user variables, user variables remain as the single
source of input to a template that a user can easily discover using
`packer inspect`.
-> **Why can't I use `~` for the home variable?** `~` is a special variable
@ -101,7 +100,7 @@ Consul keys can be used within your template using the `consul_key` function.
This function is available *only* within the default value of a user variable,
for reasons similar to environment variables above.
```json
``` json
{
"variables": {
"soft_versions": "{{ consul_key `my_image/softs_versions/next` }}"
@ -109,11 +108,12 @@ for reasons similar to environment variables above.
}
```
This will default `soft_versions` to the value of the key `my_image/softs_versions/next`
in consul.
This will default `soft_versions` to the value of the key
`my_image/softs_versions/next` in consul.
The configuration for consul (address, tokens, ...) must be specified as environment variables,
as specified in the [Documentation](https://www.consul.io/docs/commands/index.html#environment-variables).
The configuration for Consul (address, tokens, ...) must be provided as
environment variables, as described in the
[Documentation](https://www.consul.io/docs/commands/index.html#environment-variables).
## Vault Variables
@ -127,37 +127,33 @@ An example of using a v2 kv engine:
If you store a value in vault using `vault kv put secret/hello foo=world`, you
can access it using the following template engine:
```json
``` json
{
"variables": {
"my_secret": "{{ vault `/secret/data/hello` `foo`}}"
}
}
```
which will assign "my_secret": "world"
which will assign "my\_secret": "world"
An example of using a v1 kv engine:
If you store a value in vault using:
```
vault secrets enable -version=1 -path=secrets kv
vault kv put secrets/hello foo=world
```
vault secrets enable -version=1 -path=secrets kv
vault kv put secrets/hello foo=world
You can access it using the following template engine:
```
{
"variables": {
"VAULT_SECRETY_SECRET": "{{ vault `secrets/hello` `foo`}}"
}
}
```
{
"variables": {
"VAULT_SECRETY_SECRET": "{{ vault `secrets/hello` `foo`}}"
}
}
This example accesses the Vault path
`secret/data/foo` and returns the value stored at the key `bar`, storing it as
"my_secret".
This example accesses the Vault path `secrets/hello` and returns the value
stored at the key `foo`, storing it as "VAULT\_SECRETY\_SECRET".
In order for this to work, you must set the environment variables `VAULT_TOKEN`
and `VAULT_ADDR` to valid values.
@ -170,7 +166,7 @@ too. For example, the `amazon-ebs` builder has a configuration parameter called
You can parameterize this by using a variable that is a list of regions, joined
by a `,`. For example:
```json
``` json
{
"variables": {
"destination_regions": "us-west-1,us-west-2"
@ -201,18 +197,17 @@ by a `,`. For example:
## Setting Variables
Now that we covered how to define and use user variables within a
template, the next important point is how to actually set these
variables. Packer exposes two methods for setting user variables: from
the command line or from a file.
Now that we covered how to define and use user variables within a template, the
next important point is how to actually set these variables. Packer exposes two
methods for setting user variables: from the command line or from a file.
### From the Command Line
To set user variables from the command line, the `-var` flag is used as
a parameter to `packer build` (and some other commands). Continuing our
example above, we could build our template using the command below. The
command is split across multiple lines for readability, but can of
course be a single line.
To set user variables from the command line, the `-var` flag is used as a
parameter to `packer build` (and some other commands). Continuing our example
above, we could build our template using the command below. The command is
split across multiple lines for readability, but can of course be a single
line.
``` text
$ packer build \
@ -222,19 +217,18 @@ $ packer build \
```
As you can see, the `-var` flag can be specified multiple times in order to set
multiple variables. Also, variables set later on the command-line override
any earlier set variable of the same name.
multiple variables. Also, variables set later on the command-line override any
earlier set variable of the same name.
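As a sketch of this precedence, in the command below the second `-var` flag
wins, so `aws_access_key` ends up set to `bar`:

``` text
$ packer build \
    -var 'aws_access_key=foo' \
    -var 'aws_access_key=bar' \
    template.json
```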
**warning**
If you are calling Packer from cmd.exe, you should double-quote your variables
rather than single-quoting them. For example:
**warning** If you are calling Packer from cmd.exe, you should double-quote
your variables rather than single-quoting them. For example:
`packer build -var "aws_secret_key=foo" template.json`
### From a File
Variables can also be set from an external JSON file. The `-var-file` flag reads
a file containing a key/value mapping of variables to values and sets
Variables can also be set from an external JSON file. The `-var-file` flag
reads a file containing a key/value mapping of variables to values and sets
those variables. An example JSON file may look like this:
``` json
@ -255,14 +249,13 @@ On Windows :
packer build -var-file variables.json template.json
```
The `-var-file` flag can be specified multiple times and variables from multiple
files will be read and applied. As you'd expect, variables read from files
specified later override a variable set earlier.
The `-var-file` flag can be specified multiple times and variables from
multiple files will be read and applied. As you'd expect, variables read from
files specified later override a variable set earlier.
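For example (the file names here are hypothetical), any variable defined in
both files would take its value from `overrides.json`, since it is specified
later:

``` text
$ packer build \
    -var-file=defaults.json \
    -var-file=overrides.json \
    template.json
```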
Combining the `-var` and `-var-file` flags together also works how you'd
expect. Variables set later in the command override variables set
earlier. So, for example, in the following command with the above
`variables.json` file:
expect. Variables set later in the command override variables set earlier. So,
for example, in the following command with the above `variables.json` file:
``` text
$ packer build \
@ -301,7 +294,7 @@ sensitive variables won't get printed to the logs by adding them to the
The above snippet of code will function exactly the same as if you did not set
"sensitive-variables", except that the Packer UI and logs will replace all
instances of "bar" and of whatever the value of "my_secret" is with
instances of "bar" and of whatever the value of "my\_secret" is with
`<sensitive>`. This allows you to be confident that you are not printing
secrets in plaintext to your logs by accident.
@ -309,11 +302,11 @@ secrets in plaintext to our logs by accident.
## Making a provisioner step conditional on the value of a variable
There is no specific syntax in Packer templates for making a provisioner
step conditional, depending on the value of a variable. However, you may
be able to do this by referencing the variable within a command that
you execute. For example, here is how to make a `shell-local`
provisioner only run if the `do_nexpose_scan` variable is non-empty.
There is no specific syntax in Packer templates for making a provisioner step
conditional, depending on the value of a variable. However, you may be able to
do this by referencing the variable within a command that you execute. For
example, here is how to make a `shell-local` provisioner only run if the
`do_nexpose_scan` variable is non-empty.
``` json
{