run docs through pandoc

Matthew Hooker 2017-06-14 18:04:16 -07:00
parent 3a579bea81
commit bcc0d24bf4
No known key found for this signature in database
GPG Key ID: 7B5F933D9CE8C6A1
92 changed files with 2940 additions and 2989 deletions


@ -60,6 +60,9 @@ fmt: ## Format Go code
fmt-check: ## Check go code formatting
$(CURDIR)/scripts/gofmtcheck.sh $(GOFMT_FILES)
fmt-docs:
@find ./website/source/docs -name "*.md" -exec pandoc --wrap auto --columns 79 --atx-headers -s -f "markdown_github+yaml_metadata_block" -t "markdown_github+yaml_metadata_block" {} -o {} \;
# Install js-beautify with npm install -g js-beautify
fmt-examples:
find examples -name *.json | xargs js-beautify -r -s 2 -n -eol "\n"
@ -91,4 +94,4 @@ updatedeps:
help:
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
.PHONY: bin checkversion ci default deps fmt fmt-examples generate releasebin test testacc testrace updatedeps
.PHONY: bin checkversion ci default deps fmt fmt-docs fmt-examples generate releasebin test testacc testrace updatedeps
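The `help` target above uses a common self-documenting Makefile pattern: any target annotated with a trailing `## comment` is picked up and pretty-printed. A minimal sketch of the same grep/awk pipeline run directly in a shell — the stand-in Makefile path and contents are illustrative; `$$1`/`$$2` in a Makefile become `$1`/`$2` in the shell, and the color escapes are dropped here:

``` shell
# Stand-in Makefile: two annotated targets and one without an annotation.
cat > /tmp/demo.mk <<'EOF'
fmt: ## Format Go code
fmt-check: ## Check go code formatting
clean:
EOF

# Same pipeline as the help target; a greedy .* replaces the Makefile's .*?
# (equivalent for these lines), and the \033 color codes are omitted.
grep -E '^[a-zA-Z_-]+:.*## ' /tmp/demo.mk | sort \
  | awk 'BEGIN {FS = ":.*## "}; {printf "%-30s %s\n", $1, $2}'
# Targets without a "## " annotation (clean, here) are simply not listed.
```

Because unannotated targets are omitted, adding a `## comment` to a new target is all it takes to document it in `make help`.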


@ -1,12 +1,12 @@
---
layout: docs
page_title: Terminology
description: |-
description: |
There are a handful of terms used throughout the Packer documentation where
the meaning may not be immediately obvious if you haven't used Packer before.
Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical
order for quick referencing.
layout: docs
page_title: Terminology
---
# Packer Terminology


@ -4,7 +4,7 @@ description: |
customized images based on an existing base image.
layout: docs
page_title: Alicloud Image Builder
...
---
# Alicloud Image Builder
@ -39,7 +39,7 @@ builder.
Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik)
interface.
- `image_name` (string) - The name of the user-defined image, [2, 128] English
- `image_name` (string) - The name of the user-defined image, \[2, 128\] English
or Chinese characters. It must begin with an uppercase/lowercase letter or
a Chinese character, and may contain numbers, `_` or `-`. It cannot begin with
`http://` or `https://`.
@ -47,8 +47,6 @@ builder.
- `source_image` (string) - This is the base image ID from which you want to
create your customized images.
### Optional:
- `skip_region_validation` (bool) - The region validation can be skipped if this
@ -56,7 +54,7 @@ builder.
- `image_description` (string) - The description of the image, with a length
limit of 0 to 256 characters. Leaving it blank means null, which is the
default value. It cannot begin with http:// or https://.
default value. It cannot begin with `http://` or `https://`.
- `image_version` (string) - The version number of the image, with a length limit
of 1 to 40 English characters.
@ -67,8 +65,8 @@ builder.
- `image_copy_regions` (array of string) - The IDs of the destination regions to copy the image to.
- `image_copy_names` (array of string) - The name of the destination image, [2,
128] English or Chinese characters. It must begin with an uppercase/lowercase
- `image_copy_names` (array of string) - The name of the destination image, \[2,
128\] English or Chinese characters. It must begin with an uppercase/lowercase
letter or a Chinese character, and may contain numbers, `_` or `-`. It cannot
begin with `http://` or `https://`.
@ -81,22 +79,22 @@ builder.
duplicated existing image, the source snapshot of this image will be deleted
as well.
- `disk_name` (string) - The value of disk name is blank by default. [2, 128]
- `disk_name` (string) - The value of disk name is blank by default. \[2, 128\]
English or Chinese characters, must begin with an uppercase/lowercase letter
or Chinese character. Can contain numbers, `.`, `_` and `-`. The disk name
will appear on the console. It cannot begin with http:// or https://.
will appear on the console. It cannot begin with `http://` or `https://`.
- `disk_category` (string) - Category of the data disk. Optional values are:
- cloud - general cloud disk
- cloud_efficiency - efficiency cloud disk
- cloud_ssd - cloud SSD
- cloud\_efficiency - efficiency cloud disk
- cloud\_ssd - cloud SSD
Default value: cloud.
- `disk_size` (int) - Size of the system disk, in GB, values range:
- cloud - 5 ~ 2000
- cloud_efficiency - 20 ~ 2048
- cloud_ssd - 20 ~ 2048
- cloud\_efficiency - 20 ~ 2048
- cloud\_ssd - 20 ~ 2048
The value should be equal to or greater than the size of the specific SnapshotId.
@ -106,14 +104,14 @@ builder.
Snapshots taken on or before July 15, 2013 cannot be used to create a disk.
- `disk_description` (string) - The value of disk description is blank by default. [2, 256] characters. The disk description will appear on the console. It cannot begin with http:// or https://.
- `disk_description` (string) - The value of disk description is blank by default. \[2, 256\] characters. The disk description will appear on the console. It cannot begin with `http://` or `https://`.
- `disk_delete_with_instance` (string) - Whether or not the disk is released along with the instance:
- True indicates that when the instance is released, this disk will be released with it
- False indicates that when the instance is released, this disk will be retained.
- `disk_device` (string) - Device information of the related instance: such as
`/dev/xvdb` It is null unless the Status is In_use.
`/dev/xvdb` It is null unless the Status is In\_use.
- `zone_id` (string) - ID of the zone to which the disk belongs.
@ -137,7 +135,7 @@ builder.
be created automatically.
- `security_group_name` (string) - The security group name. The default value is
blank. [2, 128] English or Chinese characters, must begin with an
blank. \[2, 128\] English or Chinese characters, must begin with an
uppercase/lowercase letter or Chinese character. Can contain numbers, `.`,
`_` or `-`. It cannot begin with `http://` or `https://`.
@ -148,7 +146,7 @@ builder.
- `vpc_id` (string) - VPC ID allocated by the system.
- `vpc_name` (string) - The VPC name. The default value is blank. [2, 128]
- `vpc_name` (string) - The VPC name. The default value is blank. \[2, 128\]
English or Chinese characters, must begin with an uppercase/lowercase letter
or Chinese character. Can contain numbers, `_` and `-`. The VPC name
will appear on the console. Cannot begin with `http://` or `https://`.
@ -163,7 +161,7 @@ builder.
uppercase/lowercase letter or a Chinese character and can contain numerals,
`.`, `_`, or `-`. The instance name is displayed on the Alibaba Cloud
console. If this parameter is not specified, the default value is InstanceId
of the instance. It cannot begin with http:// or https://.
of the instance. It cannot begin with `http://` or `https://`.
- `internet_charge_type` (string) - Internet charge type, which can be
`PayByTraffic` or `PayByBandwidth`. Optional values:
@ -172,24 +170,22 @@ builder.
If this parameter is not specified, the default value is `PayByBandwidth`.
- `internet_max_bandwidth_out` (string) - Maximum outgoing bandwidth to the public
network, measured in Mbps (Mega bit per second).
Value range:
- PayByBandwidth: [0, 100]. If this parameter is not specified, API automatically sets it to 0 Mbps.
- PayByTraffic: [1, 100]. If this parameter is not specified, an error is returned.
- PayByBandwidth: \[0, 100\]. If this parameter is not specified, API automatically sets it to 0 Mbps.
- PayByTraffic: \[1, 100\]. If this parameter is not specified, an error is returned.
- `temporary_key_pair_name` (string) - The name of the temporary key pair to
generate. By default, Packer generates a name that looks like `packer_<UUID>`,
where `<UUID>` is a 36 character unique identifier.
## Basic Example
Here is a basic example for Alicloud.
```json
``` json
{
"variables": {
"access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
@ -217,7 +213,6 @@ Here is a basic example for Alicloud.
}
```
See the
[examples/alicloud](https://github.com/hashicorp/packer/tree/master/examples/alicloud)
folder in the Packer project for more examples.
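Several of the optional keys documented above combine naturally; here is a hedged fragment of a builder block (all values are illustrative, and `disk_size` must respect the per-category ranges listed above):

``` json
{
  "image_name": "my_custom_image",
  "image_description": "image built by packer",
  "image_version": "1.0",
  "disk_name": "my_data_disk",
  "disk_category": "cloud_efficiency",
  "disk_size": 50
}
```

These keys would sit alongside the required settings such as `source_image` and the access credentials shown in the basic example above.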


@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-amazon-chroot
page_title: Amazon chroot - Builders
description: |-
description: |
The amazon-chroot Packer builder is able to create Amazon AMIs backed by an
EBS volume as the root device. For more information on the difference between
instance storage and EBS-backed instances, storage for the root device section
in the EC2 documentation.
layout: docs
page_title: 'Amazon chroot - Builders'
sidebar_current: 'docs-builders-amazon-chroot'
---
# AMI Builder (chroot)
@ -24,7 +24,7 @@ builder is able to build an EBS-backed AMI without launching a new EC2 instance.
This can dramatically speed up AMI builds for organizations that need
extra-fast builds.
~> **This is an advanced builder** If you're just getting started with
~&gt; **This is an advanced builder** If you're just getting started with
Packer, we recommend starting with the [amazon-ebs
builder](/docs/builders/amazon-ebs.html), which is much easier to use.
@ -122,7 +122,7 @@ each category, the available configuration keys are alphabetized.
- `custom_endpoint_ec2` (string) - this option is useful if you use
another cloud provider that provides an API compatible with AWS EC2;
specify another endpoint like this "https://ec2.another.endpoint..com"
specify another endpoint like this "<https://ec2.another.endpoint>..com"
- `device_path` (string) - The path to the device where the root volume of the
source AMI will be attached. This defaults to "" (empty string), which
@ -132,8 +132,7 @@ each category, the available configuration keys are alphabetized.
networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make
sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced networking](
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
@ -266,7 +265,7 @@ each category, the available configuration keys are alphabetized.
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
Example:
```json
``` json
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
@ -303,7 +302,7 @@ each category, the available configuration keys are alphabetized.
Here is a basic example. It is completely valid except for the access keys:
```json
``` json
{
"type": "amazon-chroot",
"access_key": "YOUR KEY HERE",
@ -330,7 +329,7 @@ However, if you want to change or add the mount points, you may use the
`chroot_mounts` configuration. Here is an example configuration which only
mounts `/prod` and `/dev`:
```json
``` json
{
"chroot_mounts": [
["proc", "proc", "/proc"],
@ -370,7 +369,7 @@ For Debian-based distributions you can set up a
file which will prevent packages installed by your provisioners from starting
services:
```json
``` json
{
"type": "shell",
"inline": [
@ -398,7 +397,7 @@ The device setup commands partition the device with one partition for use as an
HVM image and format it as ext4. This builder block should be followed by
provisioning commands to install the OS and bootloader.
```json
``` json
{
"type": "amazon-chroot",
"ami_name": "packer-from-scratch {{timestamp}}",


@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-amazon-ebsbacked
page_title: Amazon EBS - Builders
description: |-
description: |
The amazon-ebs Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in EC2. For more information on the difference between
EBS-backed instances and instance-store backed instances, see the storage for
the root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon EBS - Builders'
sidebar_current: 'docs-builders-amazon-ebsbacked'
---
# AMI Builder (EBS backed)
@ -29,7 +29,7 @@ bit.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in your
account, it is up to you to use, delete, etc. the AMI.
-> **Note:** Temporary resources are, by default, all created with the prefix
-&gt; **Note:** Temporary resources are, by default, all created with the prefix
`packer`. This can be useful if you want to restrict the security groups and
key pairs Packer is able to operate on.
@ -76,36 +76,36 @@ builder.
on the type of VM you use. The block device mappings allow for the following
configuration:
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination. Default `false`. **NOTE**: If this
value is not explicitly set to `true` and volumes are not cleaned up by
an alternative method, additional volumes will accumulate after
every build.
- `device_name` (string) - The device name exposed to the instance (for
example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`.
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on
[IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
- `no_device` (boolean) - Suppresses the specified device included in the
block device mapping of the AMI
- `snapshot_id` (string) - The ID of the snapshot
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device
Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`
- `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD)
volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic
volumes
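Putting the mapping keys above together, a hedged sketch of how they are typically grouped in a template — `launch_block_device_mappings` grows the root volume of the build instance, while `ami_block_device_mappings` adds a device to the resulting AMI; the device names, sizes, and virtual name are illustrative:

``` json
{
  "launch_block_device_mappings": [
    {
      "device_name": "/dev/sda1",
      "volume_size": 40,
      "volume_type": "gp2",
      "delete_on_termination": true
    }
  ],
  "ami_block_device_mappings": [
    {
      "device_name": "/dev/sdb",
      "virtual_name": "ephemeral0"
    }
  ]
}
```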
@ -145,7 +145,7 @@ builder.
- `custom_endpoint_ec2` (string) - this option is useful if you use
another cloud provider that provides an API compatible with AWS EC2;
specify another endpoint like this "https://ec2.another.endpoint..com"
specify another endpoint like this "<https://ec2.another.endpoint>..com"
- `disable_stop_instance` (boolean) - Packer normally stops the build instance
after all provisioners have run. For Windows instances, it is sometimes
@ -154,7 +154,7 @@ builder.
stop the instance and will wait for you to stop it manually. You can do this
with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html).
```json
``` json
{
"type": "windows-shell",
"inline": ["\"c:\\Program Files\\Amazon\\Ec2ConfigService\\ec2config.exe\" -sysprep"]
@ -169,8 +169,7 @@ builder.
networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make
sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced networking](
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
@ -260,7 +259,7 @@ builder.
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
Example:
```json
``` json
{
"source_ami_filter": {
"filters": {
@ -333,7 +332,7 @@ builder.
- `temporary_key_pair_name` (string) - The name of the temporary key pair
to generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where \<UUID\> is a 36 character unique identifier.
`packer_<UUID>`, where &lt;UUID&gt; is a 36 character unique identifier.
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
@ -360,7 +359,7 @@ builder.
Here is a basic example. You will need to provide access keys, and may need to
change the AMI IDs according to what images exist at the time the template is run:
```json
``` json
{
"type": "amazon-ebs",
"access_key": "YOUR KEY HERE",
@ -373,7 +372,7 @@ change the AMI IDs according to what images exist at the time the template is ru
}
```
-> **Note:** Packer can also read the access key and secret access key from
-&gt; **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
@ -397,7 +396,7 @@ configuration of `launch_block_device_mappings` will expand the root volume
`ami_block_device_mappings` AWS will attach additional volumes `/dev/sdb` and
`/dev/sdc` when we boot a new instance of our AMI.
```json
``` json
{
"type": "amazon-ebs",
"access_key": "YOUR KEY HERE",
@ -435,7 +434,7 @@ Here is an example using the optional AMI tags. This will add the tags
provide your access keys, and may need to change the source AMI ID based on what
images exist when this template is run:
```json
``` json
{
"type": "amazon-ebs",
"access_key": "YOUR KEY HERE",
@ -452,7 +451,7 @@ images exist when this template is run:
}
```
-> **Note:** Packer uses pre-built AMIs as the source for building images.
-&gt; **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. Packer will attempt to clean
up all residual volumes that are not designated by the user to remain after


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-amazon-ebssurrogate
page_title: Amazon EBS Surrogate - Builders
description: |-
description: |
The amazon-ebssurrogate Packer builder is like the chroot builder, but does
not require running inside an EC2 instance.
layout: docs
page_title: 'Amazon EBS Surrogate - Builders'
sidebar_current: 'docs-builders-amazon-ebssurrogate'
---
# EBS Surrogate Builder
@ -138,7 +138,7 @@ builder.
- `custom_endpoint_ec2` (string) - this option is useful if you use
another cloud provider that provides an API compatible with AWS EC2;
specify another endpoint like this "https://ec2.another.endpoint..com"
specify another endpoint like this "<https://ec2.another.endpoint>..com"
- `disable_stop_instance` (boolean) - Packer normally stops the build instance
after all provisioners have run. For Windows instances, it is sometimes
@ -147,7 +147,7 @@ builder.
stop the instance and will wait for you to stop it manually. You can do this
with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html).
```json
``` json
{
"type": "windows-shell",
"inline": ["\"c:\\Program Files\\Amazon\\Ec2ConfigService\\ec2config.exe\" -sysprep"]
@ -162,8 +162,7 @@ builder.
networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make
sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced networking](
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
@ -253,7 +252,7 @@ builder.
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
Example:
```json
``` json
{
"source_ami_filter": {
"filters": {
@ -349,7 +348,7 @@ builder.
## Basic Example
```json
``` json
{
"type" : "amazon-ebssurrogate",
"secret_key" : "YOUR SECRET KEY HERE",
@ -376,7 +375,7 @@ builder.
}
```
-> **Note:** Packer can also read the access key and secret access key from
-&gt; **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
@ -392,7 +391,7 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private
key in the current directory and will output the DNS or IP information as well.
You can use this information to access the instance as it is running.
-> **Note:** Packer uses pre-built AMIs as the source for building images.
-&gt; **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. In addition to those volumes
created by this builder, any volumes in the source AMI which are not marked for


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-amazon-ebsvolume
page_title: Amazon EBS Volume - Builders
description: |-
description: |
The amazon-ebsvolume Packer builder is like the EBS builder, but is intended
to create EBS volumes rather than a machine image.
layout: docs
page_title: 'Amazon EBS Volume - Builders'
sidebar_current: 'docs-builders-amazon-ebsvolume'
---
# EBS Volume Builder
@ -25,7 +25,7 @@ instance while the image is being created.
The builder does *not* manage EBS Volumes. Once it creates volumes and stores
them in your account, it is up to you to use, delete, etc. the volumes.
-> **Note:** Temporary resources are, by default, all created with the prefix
-&gt; **Note:** Temporary resources are, by default, all created with the prefix
`packer`. This can be useful if you want to restrict the security groups and
key pairs Packer is able to operate on.
@ -84,7 +84,7 @@ builder.
volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic
volumes
- `tags` (map) - Tags to apply to the volume. These are retained after the
builder completes. This is a [template engine]
builder completes. This is a \[template engine\]
(/docs/templates/engine.html) where the `SourceAMI`
variable is replaced with the source AMI ID and `BuildRegion` variable
is replaced with the value of `region`.
@ -98,7 +98,7 @@ builder.
- `custom_endpoint_ec2` (string) - this option is useful if you use
another cloud provider that provides an API compatible with AWS EC2;
specify another endpoint like this "https://ec2.another.endpoint..com"
specify another endpoint like this "<https://ec2.another.endpoint>..com"
- `ebs_optimized` (boolean) - Mark instance as [EBS
Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
@ -108,8 +108,7 @@ builder.
networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make
sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced networking](
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
- `iam_instance_profile` (string) - The name of an [IAM instance
profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html)
@ -167,7 +166,7 @@ builder.
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
Example:
```json
``` json
{
"source_ami_filter": {
"filters": {
@ -225,7 +224,7 @@ builder.
- `temporary_key_pair_name` (string) - The name of the temporary key pair
to generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where \<UUID\> is a 36 character unique identifier.
`packer_<UUID>`, where &lt;UUID&gt; is a 36 character unique identifier.
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
@ -247,10 +246,9 @@ builder.
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value: `10m`
## Basic Example
```json
``` json
{
"type" : "amazon-ebsvolume",
"secret_key" : "YOUR SECRET KEY HERE",
@ -294,7 +292,7 @@ builder.
}
```
-> **Note:** Packer can also read the access key and secret access key from
-&gt; **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
@ -310,7 +308,7 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private
key in the current directory and will output the DNS or IP information as well.
You can use this information to access the instance as it is running.
-> **Note:** Packer uses pre-built AMIs as the source for building images.
-&gt; **Note:** Packer uses pre-built AMIs as the source for building images.
These source AMIs may include volumes that are not flagged to be destroyed on
termination of the instance building the new image. In addition to those volumes
created by this builder, any volumes in the source AMI which are not marked for


@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-amazon-instance
page_title: Amazon instance-store - Builders
description: |-
description: |
The amazon-instance Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the storage for the
root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon instance-store - Builders'
sidebar_current: 'docs-builders-amazon-instance'
---
# AMI Builder (instance-store)
@ -29,16 +29,16 @@ created. This simplifies configuration quite a bit.
The builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc. the AMI.
-> **Note:** Temporary resources are, by default, all created with the prefix
-&gt; **Note:** Temporary resources are, by default, all created with the prefix
`packer`. This can be useful if you want to restrict the security groups and
key pairs Packer is able to operate on.
-> **Note:** This builder requires that the [Amazon EC2 AMI
-&gt; **Note:** This builder requires that the [Amazon EC2 AMI
Tools](https://aws.amazon.com/developertools/368) are installed onto the
machine. This can be done within a provisioner, but must be done before the
builder finishes running.
~> Instance builds are not supported for Windows. Use [`amazon-ebs`](amazon-ebs.html) instead.
~&gt; Instance builds are not supported for Windows. Use [`amazon-ebs`](amazon-ebs.html) instead.
## Configuration Reference
@ -183,7 +183,7 @@ builder.
- `custom_endpoint_ec2` (string) - this option is useful if you use
another cloud provider that provides an API compatible with AWS EC2;
specify another endpoint like this "https://ec2.another.endpoint..com"
specify another endpoint like this "<https://ec2.another.endpoint>..com"
- `ebs_optimized` (boolean) - Mark instance as [EBS
Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
@ -193,8 +193,7 @@ builder.
networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make
sure enhanced networking is enabled on your instance. See [Amazon's
documentation on enabling enhanced networking](
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking)
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Defaults to `false`.
@ -259,7 +258,7 @@ builder.
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
Example:
```json
``` json
{
"source_ami_filter": {
"filters": {
@ -334,7 +333,7 @@ builder.
- `temporary_key_pair_name` (string) - The name of the temporary key pair
to generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where \<UUID\> is a 36 character unique identifier.
`packer_<UUID>`, where &lt;UUID&gt; is a 36 character unique identifier.
- `user_data` (string) - User data to apply when launching the instance. Note
that you need to be careful about escaping characters due to the templates
@ -361,7 +360,7 @@ builder.
Here is a basic example. It is completely valid except for the access keys:
```json
``` json
{
"type": "amazon-instance",
"access_key": "YOUR KEY HERE",
@ -381,7 +380,7 @@ Here is a basic example. It is completely valid except for the access keys:
}
```
-> **Note:** Packer can also read the access key and secret access key from
-&gt; **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
@ -416,7 +415,7 @@ multiple lines for convenience of reading. The bundle volume command is
responsible for executing `ec2-bundle-vol` in order to store and image of the
root filesystem to use to create the AMI.
```text
``` text
sudo -i -n ec2-bundle-vol \
-k {{.KeyPath}} \
-u {{.AccountId}} \
@ -432,7 +431,7 @@ sudo -i -n ec2-bundle-vol \
The available template variables should be self-explanatory based on the
parameters they're used to satisfy the `ec2-bundle-vol` command.
~> **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
~&gt; **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
.gpg files during the bundling of the AMI, which can cause problems on some
systems, such as Ubuntu. You may want to customize the bundle volume command to
include those files (see the `--no-filter` option of `ec2-bundle-vol`).
@ -444,7 +443,7 @@ multiple lines for convenience of reading. Access key and secret key are omitted
if using instance profile. The bundle upload command is responsible for taking
the bundled volume and uploading it to S3.
```text
``` text
sudo -i -n ec2-upload-bundle \
-b {{.BucketName}} \
-m {{.ManifestPath}} \


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-amazon
page_title: Amazon AMI - Builders
description: |-
description: |
Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the AMI.
layout: docs
page_title: 'Amazon AMI - Builders'
sidebar_current: 'docs-builders-amazon'
---
# Amazon AMI Builder
@ -34,7 +34,7 @@ Packer supports the following builders at the moment:
not require running in AWS. This is an **advanced builder and should not be
used by newcomers**.
-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
-&gt; **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
generally recommends EBS-backed images nowadays.
@@ -72,7 +72,7 @@ Credentials are resolved in the following order:
Packer depends on the [AWS
SDK](https://aws.amazon.com/documentation/sdk-for-go/) to perform automatic
lookup using _credential chains_. In short, the SDK looks for credentials in
lookup using *credential chains*. In short, the SDK looks for credentials in
the following order:
1. Environment variables.
@@ -93,7 +93,7 @@ the task's or instance's IAM role, if it has one.
The following policy document provides the minimal set of permissions necessary for
Packer to work:
```json
``` json
{
"Version": "2012-10-17",
"Statement": [{
@@ -152,7 +152,7 @@ The example policy below may help packer work with IAM roles. Note that this
example provides more than the minimal set of permissions needed for packer to
work, but specifics will depend on your use-case.
```json
``` json
{
"Sid": "PackerIAMPassRole",
"Effect": "Allow",
@@ -173,6 +173,6 @@ fail. If that's the case, you might see an error like this:
==> amazon-ebs: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials
If you suspect your system's date is wrong, you can compare it against
http://www.time.gov/. On Linux/OS X, you can run the `date` command to get the
<http://www.time.gov/>. On Linux/OS X, you can run the `date` command to get the
current time. If you're on Linux, you can try setting the time with ntp by
running `sudo ntpd -q`.
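A quick shell check of the local clock, as a sketch (the `ntpd` step is Linux-specific and requires root, so it is left as a comment):

``` shell
# Print the current time in UTC; compare it by eye with http://www.time.gov/
now_utc=$(date -u)
echo "local clock (UTC): $now_utc"
# If the clock has drifted, one common fix on Linux is:
#   sudo ntpd -q
```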

View File

@@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-azure-setup
page_title: Setup - Azure - Builders
description: |-
description: |
In order to build VMs in Azure, Packer needs various configuration options.
These options and how to obtain them are documented on this page.
layout: docs
page_title: 'Setup - Azure - Builders'
sidebar_current: 'docs-builders-azure-setup'
---
# Authorizing Packer Builds in Azure
@@ -23,7 +23,7 @@ In order to build VMs in Azure Packer needs 6 configuration options to be specif
- `storage_account` - name of the storage account where your VHD(s) will be stored
-> Behind the scenes Packer uses the OAuth protocol to authenticate against Azure Active Directory and authorize requests to the Azure Service Management API. These topics are unnecessarily complicated so we will try to ignore them for the rest of this document.<br /><br />You do not need to understand how OAuth works in order to use Packer with Azure, though the Active Directory terms "service principal" and "role" will be useful for understanding Azure's access policies.
-&gt; Behind the scenes Packer uses the OAuth protocol to authenticate against Azure Active Directory and authorize requests to the Azure Service Management API. These topics are unnecessarily complicated so we will try to ignore them for the rest of this document.<br /><br />You do not need to understand how OAuth works in order to use Packer with Azure, though the Active Directory terms "service principal" and "role" will be useful for understanding Azure's access policies.
In order to get all of the items above, you will need a username and password for your Azure account.
@@ -38,13 +38,13 @@ deploying Windows VMs.
There are three pieces of information you must provide to enable device login mode.
1. SubscriptionID
1. Resource Group - parent resource group that Packer uses to build an image.
1. Storage Account - storage account where the image will be placed.
1. SubscriptionID
2. Resource Group - parent resource group that Packer uses to build an image.
3. Storage Account - storage account where the image will be placed.
> Device login mode is enabled by not setting client_id and client_secret.
> Device login mode is enabled by not setting client\_id and client\_secret.
The device login flow asks that you open a web browser, navigate to http://aka.ms/devicelogin, and input the supplied
The device login flow asks that you open a web browser, navigate to <http://aka.ms/devicelogin>, and input the supplied
code. This authorizes the Packer for Azure application to act on your behalf. An OAuth token will be created, and stored
in the user's home directory (~/.azure/packer/oauth-TenantID.json). This token is used if the token file exists, and it
is refreshed as necessary. The token file prevents the need to continually execute the device login flow.
@@ -53,11 +53,11 @@ is refreshed as necessary. The token file prevents the need to continually exec
To get the credentials above, we will need to install the Azure CLI. Please refer to Microsoft's official [installation guide](https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/).
-> The guides below also use a tool called [`jq`](https://stedolan.github.io/jq/) to simplify the output from the Azure CLI, though this is optional. If you use homebrew you can simply `brew install node jq`.
-&gt; The guides below also use a tool called [`jq`](https://stedolan.github.io/jq/) to simplify the output from the Azure CLI, though this is optional. If you use homebrew you can simply `brew install node jq`.
If you already have node.js installed you can use `npm` to install `azure-cli`:
```shell
``` shell
$ npm install -g azure-cli --no-progress
```
@@ -73,26 +73,24 @@ If you want more control or the script does not work for you, you can also use t
Login using the Azure CLI
```shell
``` shell
$ azure config mode arm
$ azure login -u USERNAME
```
Get your account information
```shell
``` shell
$ azure account list --json | jq -r '.[].name'
$ azure account set ACCOUNTNAME
$ azure account show --json | jq -r ".[] | .id"
```
-> Throughout this document when you see a command pipe to `jq` you may instead omit `--json` and everything after it, but the output will be more verbose. For example you can simply run `azure account list` instead.
-&gt; Throughout this document when you see a command pipe to `jq` you may instead omit `--json` and everything after it, but the output will be more verbose. For example you can simply run `azure account list` instead.
This will print out one line that looks like this:
```
4f562e88-8caf-421a-b4da-e3f6786c52ec
```
4f562e88-8caf-421a-b4da-e3f6786c52ec
This is your `subscription_id`. Note it for later.
@@ -100,7 +98,7 @@ This is your `subscription_id`. Note it for later.
A [resource group](https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/#resource-groups) is used to organize related resources. Resource groups and storage accounts are tied to a location. To see available locations, run:
```shell
``` shell
$ azure location list
# ...
@@ -113,7 +111,7 @@ Your storage account (below) will need to use the same `GROUPNAME` and `LOCATION
We will need to create a storage account where your Packer artifacts will be stored. We will create a `LRS` storage account which is the least expensive price/GB at the time of writing.
```shell
``` shell
$ azure storage account create \
-g GROUPNAME \
-l LOCATION \
@@ -121,7 +119,7 @@ $ azure storage account create \
--kind storage STORAGENAME
```
-> `LRS` is meant as a literal "LRS" and not as a variable.
-&gt; `LRS` is meant as a literal "LRS" and not as a variable.
Make sure that `GROUPNAME` and `LOCATION` are the same as above.
@@ -129,7 +127,7 @@ Make sure that `GROUPNAME` and `LOCATION` are the same as above.
An application represents a way to authorize access to the Azure API. Note that you will need to specify a URL for your application (this is intended to be used for OAuth callbacks) but these do not actually need to be valid URLs.
```shell
``` shell
$ azure ad app create \
-n APPNAME \
-i APPURL \
@@ -145,7 +143,7 @@ You cannot directly grant permissions to an application. Instead, you create a s
First, get the `APPID` for the application we just created.
```shell
``` shell
$ azure ad app list --json \
| jq '.[] | select(.displayName | contains("APPNAME")) | .appId'
# ...
@@ -157,7 +155,7 @@ $ azure ad sp create --applicationId APPID
Finally, we will associate the proper permissions with our application's service principal. We're going to assign the `Owner` role to our Packer application and change the scope to manage our whole subscription. (The `Owner` role can be scoped to a specific resource group to further reduce the scope of the account.) This allows Packer to create temporary resource groups for each build.
```shell
``` shell
$ azure role assignment create \
--spn APPURL \
-o "Owner" \
@@ -166,26 +164,25 @@ $ azure role assignment create \
There are a lot of pre-defined roles and you can define your own with more granular permissions, though this is out of scope. You can see a list of pre-configured roles via:
```shell
``` shell
$ azure role list --json \
| jq ".[] | {name:.Name, description:.Description}"
```
### Configuring Packer
Now (finally) everything has been set up in Azure. Let's get our configuration keys together:
Get `subscription_id`:
```shell
``` shell
$ azure account show --json \
| jq ".[] | .id"
```
Get `client_id`
```shell
``` shell
$ azure ad app list --json \
| jq '.[] | select(.displayName | contains("APPNAME")) | .appId'
```
@@ -196,18 +193,18 @@ This cannot be retrieved. If you forgot this, you will have to delete and re-cre
Get `object_id` (OSTYpe=Windows only)
```shell
``` shell
azure ad sp show -n CLIENT_ID
```
Get `resource_group_name`
```shell
``` shell
$ azure group list
```
Get `storage_account`
```shell
``` shell
$ azure storage account list
```

View File

@@ -1,9 +1,8 @@
---
description: 'Packer supports building VHDs in Azure Resource manager.'
layout: docs
sidebar_current: docs-builders-azure
page_title: Azure - Builders
description: |-
Packer supports building VHDs in Azure Resource manager.
page_title: 'Azure - Builders'
sidebar_current: 'docs-builders-azure'
---
# Azure Resource Manager Builder
@@ -36,7 +35,7 @@ builder.
- `subscription_id` (string) Subscription under which the build will be performed. **The service principal specified in `client_id` must have full access to this subscription.**
- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be organized in Azure. The captured VHD's URL will be https://<storage_account>.blob.core.windows.net/system/Microsoft.Compute/Images/<capture_container_name>/<capture_name_prefix>.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd.
- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be organized in Azure. The captured VHD's URL will be <https://><storage_account>.blob.core.windows.net/system/Microsoft.Compute/Images/<capture_container_name>/<capture_name_prefix>.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd.
- `capture_name_prefix` (string) VHD prefix. The final artifacts will be named `PREFIX-osDisk.UUID` and `PREFIX-vmTemplate.UUID`.
@@ -72,8 +71,8 @@ builder.
CLI example `azure vm image list -l westus -p Canonical -o UbuntuServer -k 16.04.0-LTS`
- `image_url` (string) Specify a custom VHD to use. If this value is set, do not set image_publisher, image_offer,
image_sku, or image_version.
- `image_url` (string) Specify a custom VHD to use. If this value is set, do not set image\_publisher, image\_offer,
image\_sku, or image\_version.
- `temp_compute_name` (string) temporary name assigned to the VM. If this value is not set, a random value will be assigned. Knowing the resource group and VM name allows one to execute commands to update the VM during a Packer build, e.g. attach a resource disk to the VM.
@@ -99,13 +98,13 @@ builder.
communication with the VM, no public IP address is **used** or **provisioned**. This value should only be set if
Packer is executed from a host on the same subnet / virtual network.
- `virtual_network_resource_group_name` (string) If virtual_network_name is set, this value **may** also be set. If
virtual_network_name is set, and this value is not set the builder attempts to determine the resource group
- `virtual_network_resource_group_name` (string) If virtual\_network\_name is set, this value **may** also be set. If
virtual\_network\_name is set, and this value is not set the builder attempts to determine the resource group
containing the virtual network. If the resource group cannot be found, or it cannot be disambiguated, this value
should be set.
- `virtual_network_subnet_name` (string) If virtual_network_name is set, this value **may** also be set. If
virtual_network_name is set, and this value is not set the builder attempts to determine the subnet to use with
- `virtual_network_subnet_name` (string) If virtual\_network\_name is set, this value **may** also be set. If
virtual\_network\_name is set, and this value is not set the builder attempts to determine the subnet to use with
the virtual network. If the subnet cannot be found, or it cannot be disambiguated, this value should be set.
- `vm_size` (string) Size of the VM used for building. This can be changed
@@ -114,12 +113,11 @@ builder.
CLI example `azure vm sizes -l westus`
## Basic Example
Here is a basic example for Azure.
```json
``` json
{
"type": "azure-arm",
@@ -157,7 +155,7 @@ Please refer to the Azure [examples](https://github.com/hashicorp/packer/tree/ma
The following provisioner snippet shows how to sysprep a Windows VM. Deprovision should be the last operation executed by a build.
```json
``` json
{
"provisioners": [
{
@@ -175,7 +173,7 @@ The following provisioner snippet shows how to sysprep a Windows VM. Deprovisio
The following provisioner snippet shows how to deprovision a Linux VM. Deprovision should be the last operation executed by a build.
```json
``` json
{
"provisioners": [
{
@@ -192,27 +190,25 @@ The following provisioner snippet shows how to deprovision a Linux VM. Deprovis
To learn more about the Linux deprovision process please see WALinuxAgent's [README](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
#### skip_clean
#### skip\_clean
Customers have reported issues with the deprovision process where the builder hangs. The error message is similar to the following.
```
Build 'azure-arm' errored: Retryable error: Error removing temporary script at /tmp/script_9899.sh: ssh: handshake failed: EOF
```
Build 'azure-arm' errored: Retryable error: Error removing temporary script at /tmp/script_9899.sh: ssh: handshake failed: EOF
One solution is to set skip_clean to true in the provisioner. This prevents Packer from cleaning up any helper scripts uploaded to the VM during the build.
One solution is to set skip\_clean to true in the provisioner. This prevents Packer from cleaning up any helper scripts uploaded to the VM during the build.
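A minimal sketch of a shell provisioner with that setting — the inline deprovision command here is illustrative, based on the Linux deprovision snippet above:

``` json
{
  "provisioners": [
    {
      "type": "shell",
      "skip_clean": true,
      "inline": ["/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"]
    }
  ]
}
```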
## Defaults
The Azure builder attempts to pick default values that provide for a just works experience. These values can be changed by the user to more suitable values.
* The default user name is packer not root as in other builders. Most distros on Azure do not allow root to SSH to a VM hence the need for a non-root default user. Set the ssh_username option to override the default value.
* The default VM size is Standard_A1. Set the vm_size option to override the default value.
* The default image version is latest. Set the image_version option to override the default value.
- The default user name is packer not root as in other builders. Most distros on Azure do not allow root to SSH to a VM hence the need for a non-root default user. Set the ssh\_username option to override the default value.
- The default VM size is Standard\_A1. Set the vm\_size option to override the default value.
- The default image version is latest. Set the image\_version option to override the default value.
## Implementation
~> **Warning!** This is an advanced topic. You do not need to understand the implementation to use the Azure
~&gt; **Warning!** This is an advanced topic. You do not need to understand the implementation to use the Azure
builder.
The Azure builder uses ARM
@@ -225,18 +221,18 @@ form `packer-Resource-Group-<random>`. The value `<random>` is a random value th
packer. The `<random>` value is re-used as much as possible when naming resources, so users can better identify and
group these transient resources when seen in their subscription.
> The VHD is created on a user specified storage account, not a random one created at runtime. When a virtual machine
is captured the resulting VHD is stored on the same storage account as the source VHD. The VHD created by Packer must
persist after a build is complete, which is why the storage account is set by the user.
> The VHD is created on a user specified storage account, not a random one created at runtime. When a virtual machine
> is captured the resulting VHD is stored on the same storage account as the source VHD. The VHD created by Packer must
> persist after a build is complete, which is why the storage account is set by the user.
The basic steps for a build are:
1. Create a resource group.
1. Validate and deploy a VM template.
1. Execute provision - defined by the user; typically shell commands.
1. Power off and capture the VM.
1. Delete the resource group.
1. Delete the temporary VM's OS disk.
1. Create a resource group.
2. Validate and deploy a VM template.
3. Execute provision - defined by the user; typically shell commands.
4. Power off and capture the VM.
5. Delete the resource group.
6. Delete the temporary VM's OS disk.
The templates used for a build are currently fixed in the code. There is a template for Linux, Windows, and KeyVault.
The templates are themselves templated with place holders for names, passwords, SSH keys, certificates, etc.
@@ -245,15 +241,15 @@ The templates are themselves templated with place holders for names, passwords,
The Azure builder creates the following random values at runtime.
* Administrator Password: a random 32-character value using the *password alphabet*.
* Certificate: a 2,048-bit certificate used to secure WinRM communication. The certificate is valid for 24-hours, which starts roughly at invocation time.
* Certificate Password: a random 32-character value using the *password alphabet* used to protect the private key of the certificate.
* Compute Name: a random 15-character name prefixed with pkrvm; the name of the VM.
* Deployment Name: a random 15-character name prefixed with pkfdp; the name of the deployment.
* KeyVault Name: a random 15-character name prefixed with pkrkv.
* OS Disk Name: a random 15-character name prefixed with pkros.
* Resource Group Name: a random 33-character name prefixed with packer-Resource-Group-.
* SSH Key Pair: a 2,048-bit asymmetric key pair; can be overridden by the user.
- Administrator Password: a random 32-character value using the *password alphabet*.
- Certificate: a 2,048-bit certificate used to secure WinRM communication. The certificate is valid for 24-hours, which starts roughly at invocation time.
- Certificate Password: a random 32-character value using the *password alphabet* used to protect the private key of the certificate.
- Compute Name: a random 15-character name prefixed with pkrvm; the name of the VM.
- Deployment Name: a random 15-character name prefixed with pkfdp; the name of the deployment.
- KeyVault Name: a random 15-character name prefixed with pkrkv.
- OS Disk Name: a random 15-character name prefixed with pkros.
- Resource Group Name: a random 33-character name prefixed with packer-Resource-Group-.
- SSH Key Pair: a 2,048-bit asymmetric key pair; can be overridden by the user.
The default alphabet used for random values is **0123456789bcdfghjklmnpqrstvwxyz**. The alphabet was reduced (no
vowels) to prevent running afoul of Azure decency controls.
@@ -271,20 +267,20 @@ certificate in KeyVault, and Azure will ensure the certificate is injected as pa
The basic steps for a Windows build are:
1. Create a resource group.
1. Validate and deploy a KeyVault template.
1. Validate and deploy a VM template.
1. Execute provision - defined by the user; typically shell commands.
1. Power off and capture the VM.
1. Delete the resource group.
1. Delete the temporary VM's OS disk.
1. Create a resource group.
2. Validate and deploy a KeyVault template.
3. Validate and deploy a VM template.
4. Execute provision - defined by the user; typically shell commands.
5. Power off and capture the VM.
6. Delete the resource group.
7. Delete the temporary VM's OS disk.
A Windows build requires two templates and two deployments. Unfortunately, the KeyVault and VM cannot be deployed at
the same time hence the need for two templates and deployments. The time required to deploy a KeyVault template is
minimal, so overall impact is small.
> The KeyVault certificate is protected using the object_id of the SPN. This is why Windows builds require object_id,
and an SPN. The KeyVault is deleted when the resource group is deleted.
> The KeyVault certificate is protected using the object\_id of the SPN. This is why Windows builds require object\_id,
> and an SPN. The KeyVault is deleted when the resource group is deleted.
See the [examples/azure](https://github.com/hashicorp/packer/tree/master/examples/azure) folder in the packer project
for more examples.

View File

@@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-cloudstack
page_title: CloudStack - Builders
description: |-
description: |
The cloudstack Packer builder is able to create new templates for use with
CloudStack. The builder takes either an ISO or an existing template as its
source, runs any provisioning necessary on the instance after launching it and
then creates a new template from that instance.
layout: docs
page_title: 'CloudStack - Builders'
sidebar_current: 'docs-builders-cloudstack'
---
# CloudStack Builder
@@ -127,7 +127,7 @@ builder.
Here is a basic example.
```json
``` json
{
"type": "cloudstack",
"api_url": "https://cloudstack.company.com/client/api",

View File

@@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-builders-custom
page_title: Custom - Builders
description: |-
description: |
Packer is extensible, allowing you to write new builders without having to
modify the core source code of Packer itself. Documentation for creating new
builders is covered in the custom builders page of the Packer plugin section.
layout: docs
page_title: 'Custom - Builders'
sidebar_current: 'docs-builders-custom'
---
# Custom Builder

View File

@@ -1,16 +1,15 @@
---
layout: docs
sidebar_current: docs-builders-digitalocean
page_title: DigitalOcean - Builders
description: |-
description: |
The digitalocean Packer builder is able to create new images for use with
DigitalOcean. The builder takes a source image, runs any provisioning
necessary on the image after launching it, then snapshots it into a reusable
image. This reusable image can then be used as the foundation of new servers
that are launched within DigitalOcean.
layout: docs
page_title: 'DigitalOcean - Builders'
sidebar_current: 'docs-builders-digitalocean'
---
# DigitalOcean Builder
Type: `digitalocean`
@@ -42,17 +41,17 @@ builder.
- `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it. See
[https://developers.digitalocean.com/documentation/v2/\#list-all-images](https://developers.digitalocean.com/documentation/v2/#list-all-images) for
<https://developers.digitalocean.com/documentation/v2/#list-all-images> for
details on how to get a list of the accepted image names/slugs.
- `region` (string) - The name (or slug) of the region to launch the
droplet in. Consequently, this is the region where the snapshot will
be available. See
[https://developers.digitalocean.com/documentation/v2/\#list-all-regions](https://developers.digitalocean.com/documentation/v2/#list-all-regions) for
<https://developers.digitalocean.com/documentation/v2/#list-all-regions> for
the accepted region names/slugs.
- `size` (string) - The name (or slug) of the droplet size to use. See
[https://developers.digitalocean.com/documentation/v2/\#list-all-sizes](https://developers.digitalocean.com/documentation/v2/#list-all-sizes) for
<https://developers.digitalocean.com/documentation/v2/#list-all-sizes> for
the accepted size names/slugs.
### Optional:
@@ -86,13 +85,12 @@ builder.
- `user_data_file` (string) - Path to a file that will be used for the user
data when launching the Droplet.
## Basic Example
Here is a basic example. It is completely valid as soon as you enter your own
access tokens:
```json
``` json
{
"type": "digitalocean",
"api_token": "YOUR API KEY",

View File

@@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-builders-docker
page_title: Docker - Builders
description: |-
description: |
The docker Packer builder builds Docker images using Docker. The builder
starts a Docker container, runs provisioners within this container, then
exports the container for reuse or commits the image.
layout: docs
page_title: 'Docker - Builders'
sidebar_current: 'docs-builders-docker'
---
# Docker Builder
@@ -33,7 +33,7 @@ what [platforms Docker supports and how to install onto them](https://docs.docke
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will effectively repackage an image.
```json
``` json
{
"type": "docker",
"image": "ubuntu",
@@ -47,7 +47,7 @@ Below is another example, the same as above but instead of exporting the running
container, this one commits the container to an image. The image can then be
more easily tagged, pushed, etc.
```json
``` json
{
"type": "docker",
"image": "ubuntu",
@@ -66,7 +66,7 @@ Docker](https://docs.docker.com/engine/reference/commandline/commit/).
Example uses of all of the options, assuming one is building an NGINX image
from ubuntu as a simple example:
```json
``` json
{
"type": "docker",
"image": "ubuntu",
@@ -221,7 +221,7 @@ created image. This is accomplished using a sequence definition (a collection of
post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
```json
``` json
{
"post-processors": [
[
@@ -245,7 +245,7 @@ pushing the image to a container repository.
If you want to do this manually, however, perhaps from a script, you can import
the image using the process below:
```shell
``` shell
$ docker import - registry.mydomain.com/mycontainer:latest < artifact.tar
```
@@ -260,7 +260,7 @@ which tags and pushes an image. This is accomplished using a sequence definition
(a collection of post-processors that are treated as a single pipeline, see
[Post-Processors](/docs/templates/post-processors.html) for more information):
```json
``` json
{
"post-processors": [
[
@@ -285,7 +285,7 @@ Going a step further, if you wanted to tag and push an image to multiple
container repositories, this could be accomplished by defining two,
nearly-identical sequence definitions, as demonstrated by the example below:
```json
``` json
{
"post-processors": [
[
@@ -317,7 +317,7 @@ Packer can tag and push images for use in
processors work as described above and example configuration properties are
shown below:
```json
``` json
{
"post-processors": [
[

View File

@@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-builders-file
page_title: File - Builders
description: |-
description: |
The file Packer builder is not really a builder, it just creates an artifact
from a file. It can be used to debug post-processors without incurring high
wait times. It does not run any provisioners.
layout: docs
page_title: 'File - Builders'
sidebar_current: 'docs-builders-file'
---
# File Builder
@@ -21,7 +21,7 @@ wait times. It does not run any provisioners.
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will connect to the specified host via ssh.
```json
``` json
{
"type": "file",
"content": "Lorem ipsum dolor sit amet",

View File

@@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-googlecompute
page_title: Google Compute - Builders
description: |-
description: |
The googlecompute Packer builder is able to create images for use with
Google Cloud Compute Engine (GCE) based on existing images.
layout: docs
page_title: 'Google Compute - Builders'
sidebar_current: 'docs-builders-googlecompute'
---
# Google Compute Builder
@@ -17,6 +17,7 @@ Compute Engine](https://cloud.google.com/products/compute-engine)(GCE) based on
existing images. Building GCE images from scratch is not possible from Packer at
this time. For building images from scratch, please see
[Building GCE Images from Scratch](https://cloud.google.com/compute/docs/tutorials/building-images).
## Authentication
Authenticating with Google Cloud services requires at most one JSON file, called
@@ -38,7 +39,7 @@ scopes when launching the instance.
For `gcloud`, do this via the `--scopes` parameter:
```shell
``` shell
$ gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \
--scopes "https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.full_control" \
```
@@ -46,9 +47,9 @@ $ gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \
For the [Google Developers Console](https://console.developers.google.com):
1. Choose "Show advanced options"
1. Tick "Enable Compute Engine service account"
1. Choose "Read Write" for Compute
1. Choose "Full" for "Storage"
2. Tick "Enable Compute Engine service account"
3. Choose "Read Write" for Compute
4. Choose "Full" for "Storage"
**The service account will be used automatically by Packer as long as there is
no *account file* specified in the Packer configuration file.**
@@ -63,12 +64,12 @@ straightforwarded, it is documented here.
1. Log into the [Google Developers
Console](https://console.developers.google.com) and select a project.
1. Under the "APIs & Auth" section, click "Credentials."
2. Under the "APIs & Auth" section, click "Credentials."
1. Click the "Create new Client ID" button, select "Service account", and click
3. Click the "Create new Client ID" button, select "Service account", and click
"Create Client ID"
1. Click "Generate new JSON key" for the Service Account you just created. A
4. Click "Generate new JSON key" for the Service Account you just created. A
JSON file will be downloaded automatically. This is your *account file*.
### Precedence of Authentication Methods
@@ -77,33 +78,29 @@ Packer looks for credentials in the following places, preferring the first locat
1. A `account_file` option in your packer file.
1. A JSON file (Service Account) whose path is specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
2. A JSON file (Service Account) whose path is specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
1. A JSON file in a location known to the `gcloud` command-line tool. (`gcloud` creates it when it's configured)
3. A JSON file in a location known to the `gcloud` command-line tool. (`gcloud` creates it when it's configured)
On Windows, this is:
```
%APPDATA%/gcloud/application_default_credentials.json
```
On other systems:
```
$HOME/.config/gcloud/application_default_credentials.json
```
1. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above)
4. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above)
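As an illustration of the third option, a hypothetical shell check for the well-known credentials file — it only looks for the file; the `gcloud` subcommand named in the message is the usual way to create it, but verify it against your `gcloud` version:

``` shell
# Path of the credentials file gcloud writes on Linux/OS X (see the Windows
# path above).
CREDS="$HOME/.config/gcloud/application_default_credentials.json"
if [ -f "$CREDS" ]; then
  echo "application default credentials present"
else
  echo "absent; they can be created with: gcloud auth application-default login"
fi
```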
## Basic Example
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners or startup-script metadata are defined, but it will effectively
repackage an existing GCE image. The account_file is obtained in the previous
repackage an existing GCE image. The account\_file is obtained in the previous
section. If it parses as JSON it is assumed to be the file itself, otherwise it
is assumed to be the path to the file containing the JSON.
```json
``` json
{
"builders": [
{
@ -217,7 +214,7 @@ builder.
- `on_host_maintenance` (string) - Sets Host Maintenance Option. Valid
choices are `MIGRATE` and `TERMINATE`. Please see [GCE Instance Scheduling
Options](https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options),
as not all machine_types support `MIGRATE` (i.e. machines with GPUs).
as not all machine\_types support `MIGRATE` (i.e. machines with GPUs).
If preemptible is true this can only be `TERMINATE`. If preemptible
is false, it defaults to `MIGRATE`
@ -229,7 +226,7 @@ builder.
- `scopes` (array of strings) - The service account scopes for launched instance.
Defaults to:
```json
``` json
[
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/compute",
@ -273,10 +270,11 @@ when a startup script fails.
### Windows
A Windows startup script can only be provided via the 'windows-startup-script-cmd' instance
creation `metadata` field. The builder will _not_ wait for a Windows startup scripts to
creation `metadata` field. The builder will *not* wait for a Windows startup script to
terminate. You have to ensure that it finishes before the instance shuts down.
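For illustration, passing a Windows startup script might look like this in a builder stanza (a sketch — the command is a placeholder, not a complete template):

``` json
{
  "type": "googlecompute",
  "metadata": {
    "windows-startup-script-cmd": "winrm quickconfig -quiet"
  }
}
```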
### Logging
Startup script logs can be copied to a Google Cloud Storage (GCS) location specified via the
'startup-script-log-dest' instance creation `metadata` field. The GCS location must be writeable by
the credentials provided in the builder config's `account_file`.
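For example, a builder stanza enabling log copying might look like the following (a sketch — the bucket path is a placeholder):

``` json
{
  "type": "googlecompute",
  "metadata": {
    "startup-script-log-dest": "gs://my-packer-logs/startup.log"
  }
}
```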
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-hyperv-iso
page_title: Hyper-V ISO - Builders
description: |-
description: |
The Hyper-V Packer builder is able to create Hyper-V virtual machines and
export them.
layout: docs
page_title: 'Hyper-V ISO - Builders'
sidebar_current: 'docs-builders-hyperv-iso'
---
# Hyper-V Builder (from an ISO)
@ -25,7 +25,7 @@ Here is a basic example. This example is not functional. It will start the
OS installer but then fail because we don't provide the preseed file for
Ubuntu to self-install. Still, the example serves to show the basic configuration:
```json
``` json
{
"type": "hyperv-iso",
"iso_url": "http://releases.ubuntu.com/12.04/ubuntu-12.04.5-server-amd64.iso",
@ -162,7 +162,7 @@ can be configured for this builder.
- `ram_size` (integer) - The size, in megabytes, of the ram to create
for the VM. By default, this is 1 GB.
* `secondary_iso_images` (array of strings) - A list of iso paths to attached to a
- `secondary_iso_images` (array of strings) - A list of iso paths to attach to a
VM when it is booted. This is most useful for unattended Windows installs, which
look for an `Autounattend.xml` file on removable media. By default, no
secondary iso will be attached.
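For instance, attaching an answer-file iso might look like this (a sketch — the path is a placeholder):

``` json
{
  "type": "hyperv-iso",
  "secondary_iso_images": ["./answer_files/windows/answer.iso"]
}
```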
@ -263,7 +263,7 @@ In addition to the special keys, each command to type is treated as a
[template engine](/docs/templates/engine.html).
The available variables are:
* `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
that is started serving the directory specified by the `http_directory`
configuration parameter. If `http_directory` isn't specified, these will
be blank!
@ -271,7 +271,7 @@ The available variables are:
Example boot command. This is actually a working boot command used to start
an Ubuntu 12.04 installer:
```json
``` json
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -305,7 +305,7 @@ There is a [windows version of mkisofs](http://opensourcepack.blogspot.co.uk/p/c
Example powershell script. This is an actually working powershell script used to create a Windows answer iso:
```powershell
``` powershell
$isoFolder = "answer-iso"
if (test-path $isoFolder){
remove-item $isoFolder -Force -Recurse
@ -339,12 +339,11 @@ if (test-path $isoFolder){
}
```
## Example For Windows Server 2012 R2 Generation 2
Packer config:
```json
``` json
{
"builders": [
{
@ -402,7 +401,7 @@ Packer config:
autounattend.xml:
```xml
``` xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
<settings pass="windowsPE">
@ -799,12 +798,11 @@ Finish Setup cache proxy during installation -->
</settings>
<cpi:offlineImage cpi:source="wim:c:/projects/baseboxes/9600.16384.winblue_rtm.130821-1623_x64fre_server_eval_en-us-irm_sss_x64free_en-us_dv5_slipstream/sources/install.wim#Windows Server 2012 R2 SERVERDATACENTER" xmlns:cpi="urn:schemas-microsoft-com:cpi" />
</unattend>
```
sysprep-unattend.xml:
```text
``` text
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
<settings pass="generalize">
@ -873,7 +871,7 @@ a virtual switch with an `External` connection type.
### Packer config:
```json
``` json
{
"variables": {
"vm_name": "ubuntu-xenial",
@ -924,7 +922,7 @@ a virtual switch with an `External` connection type.
### preseed.cfg:
```text
``` text
## Options to set on the command line
d-i debian-installer/locale string en_US.utf8
d-i console-setup/ask_detect boolean false
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-hyperv
page_title: Hyper-V - Builders
description: |-
description: |
The Hyper-V Packer builder is able to create Hyper-V virtual machines and
export them.
layout: docs
page_title: 'Hyper-V - Builders'
sidebar_current: 'docs-builders-hyperv'
---
# HyperV Builder
@ -1,10 +1,10 @@
---
layout: docs
page_title: Builders
sidebar_current: docs-builders
description: |-
description: |
Builders are responsible for creating machines and generating images from them
for various platforms.
layout: docs
page_title: Builders
sidebar_current: 'docs-builders'
---
# Builders
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-null
page_title: Null - Builders
description: |-
description: |
The null Packer builder is not really a builder, it just sets up an SSH
connection and runs the provisioners. It can be used to debug provisioners
without incurring high wait times. It does not create any kind of image or
artifact.
layout: docs
page_title: 'Null - Builders'
sidebar_current: 'docs-builders-null'
---
# Null Builder
@ -23,7 +23,7 @@ artifact.
Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will connect to the specified host via ssh.
```json
``` json
{
"type": "null",
"ssh_host": "127.0.0.1",
@ -1,9 +1,8 @@
---
description: 'The 1&1 builder is able to create images for 1&1 cloud.'
layout: docs
sidebar_current: docs-builders-oneandone
page_title: 1&1 - Builders
description: |-
The 1&1 builder is able to create images for 1&1 cloud.
page_title: '1&1 - Builders'
sidebar_current: 'docs-builders-oneandone'
---
# 1&1 Builder
@ -34,18 +33,17 @@ builder.
- `disk_size` (string) - Amount of disk space for this image in GB. Defaults to "50"
- `image_name` (string) - Resulting image. If "image_name" is not provided Packer will generate it
- `image_name` (string) - Resulting image. If "image\_name" is not provided Packer will generate it
- `retries` (int) - The number of times Packer will retry status requests while waiting for the build to complete. Default value "600".
- `url` (string) - Endpoint for the 1&1 REST API. Default URL "https://cloudpanel-api.1and1.com/v1"
- `url` (string) - Endpoint for the 1&1 REST API. Default URL "<https://cloudpanel-api.1and1.com/v1>"
## Example
Here is a basic example:
```json
``` json
{
"builders":[
{
@ -1,13 +1,13 @@
---
layout: docs
sidebar_current: docs-builders-openstack
page_title: OpenStack - Builders
description: |-
description: |
The openstack Packer builder is able to create new images for use with
OpenStack. The builder takes a source image, runs any provisioning necessary
on the image after launching it, then creates a new reusable image. This
reusable image can then be used as the foundation of new servers that are
launched within OpenStack.
layout: docs
page_title: 'OpenStack - Builders'
sidebar_current: 'docs-builders-openstack'
---
# OpenStack Builder
@ -25,9 +25,9 @@ created. This simplifies configuration quite a bit.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
~> **OpenStack Liberty or later requires OpenSSL!** To use the OpenStack
~> **OpenStack Liberty or later requires OpenSSL!** To use the OpenStack
builder with OpenStack Liberty (Oct 2015) or later you need to have OpenSSL
installed _if you are using temporary key pairs_, i.e. don't use
installed *if you are using temporary key pairs*, i.e. don't use
[`ssh_keypair_name`](openstack.html#ssh_keypair_name) nor
[`ssh_password`](/docs/templates/communicator.html#ssh_password). All major
OS'es have OpenSSL installed by default except Windows.
@ -77,13 +77,13 @@ builder.
cluster will be used. This may be required for some OpenStack clusters.
- `cacert` (string) - Custom CA certificate file path.
If ommited the OS_CACERT environment variable can be used.
If omitted, the OS\_CACERT environment variable can be used.
- `config_drive` (boolean) - Whether or not nova should use ConfigDrive for
cloud-init metadata.
- `cert` (string) - Client certificate file path for SSL client authentication.
If omitted the OS_CERT environment variable can be used.
If omitted the OS\_CERT environment variable can be used.
- `domain_name` or `domain_id` (string) - The Domain name or ID you are
authenticating with. OpenStack installations require this if identity v3 is used.
@ -109,7 +109,7 @@ builder.
done over an insecure connection. By default this is false.
- `key` (string) - Client private key file path for SSL client authentication.
If ommited the OS_KEY environment variable can be used.
If omitted, the OS\_KEY environment variable can be used.
- `metadata` (object of key/value strings) - Glance metadata that will be
applied to the image.
@ -166,14 +166,14 @@ builder.
- `temporary_key_pair_name` (string) - The name of the temporary key pair
to generate. By default, Packer generates a name that looks like
`packer_<UUID>`, where \<UUID\> is a 36 character unique identifier.
`packer_<UUID>`, where &lt;UUID&gt; is a 36 character unique identifier.
- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this. If not specified,
Packer will use the environment variable `OS_TENANT_NAME`, if set. Tenant
is also called Project in later versions of OpenStack.
- `use_floating_ip` (boolean) - _Deprecated_ use `floating_ip` or `floating_ip_pool`
- `use_floating_ip` (boolean) - *Deprecated* use `floating_ip` or `floating_ip_pool`
instead.
- `user_data` (string) - User data to apply when launching the instance. Note
@ -187,7 +187,7 @@ builder.
Here is a basic example. This is an example to build on DevStack running in a VM.
```json
``` json
{
"type": "openstack",
"identity_endpoint": "http://<destack-ip>:5000/v3",
@ -202,7 +202,6 @@ Here is a basic example. This is a example to build on DevStack running in a VM.
"flavor": "m1.tiny",
"insecure": "true"
}
```
## Basic Example: Rackspace public cloud
@ -210,7 +209,7 @@ Here is a basic example. This is a example to build on DevStack running in a VM.
Here is a basic example. This is a working example to build an Ubuntu 12.04 LTS
(Precise Pangolin) image on the Rackspace OpenStack cloud offering.
```json
``` json
{
"type": "openstack",
"username": "foo",
@ -228,7 +227,7 @@ Here is a basic example. This is a working example to build a Ubuntu 12.04 LTS
This example builds an Ubuntu 14.04 image on a private OpenStack cloud, powered
by Metacloud.
```json
``` json
{
"type": "openstack",
"ssh_username": "root",
@ -263,7 +262,7 @@ This means you can use `OS_USERNAME` or `OS_USERID`, `OS_TENANT_ID` or
The above example would be equivalent to an RC file looking like this :
```shell
``` shell
export OS_AUTH_URL="https://identity.myprovider/v3"
export OS_USERNAME="myuser"
export OS_PASSWORD="password"
@ -274,18 +273,15 @@ export OS_PROJECT_DOMAIN_NAME="mydomain"
## Notes on OpenStack Authorization
The simplest way to get all settings for authorization against OpenStack is to
go into the OpenStack Dashboard (Horizon) select your _Project_ and navigate
_Project, Access & Security_, select _API Access_ and _Download OpenStack RC
File v3_. Source the file, and select your wanted region by setting
environment variable `OS_REGION_NAME` or `OS_REGION_ID` and `export
OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`.
go into the OpenStack Dashboard (Horizon), select your *Project*, and navigate to
*Project, Access & Security*; select *API Access* and *Download OpenStack RC
File v3*. Source the file, and select your desired region by setting the
environment variable `OS_REGION_NAME` or `OS_REGION_ID` and `export OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`.
~> `OS_TENANT_NAME` or `OS_TENANT_ID` must be used even with Identity v3,
~> `OS_TENANT_NAME` or `OS_TENANT_ID` must be used even with Identity v3,
`OS_PROJECT_NAME` and `OS_PROJECT_ID` has no effect in Packer.
To troubleshoot authorization issues, test your environment variables with the
OpenStack CLI. It can be installed with
```
$ pip install --user python-openstackclient
```
$ pip install --user python-openstackclient
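Before calling the CLI, a quick sanity check that the sourced RC file actually set the core variables can save time — a minimal sketch (the variable names follow the RC file shown earlier):

``` shell
# Verify the basic OS_* variables from the sourced RC file are present.
check_openstack_env() {
  for v in OS_AUTH_URL OS_USERNAME OS_PASSWORD; do
    eval val="\$$v"
    if [ -z "$val" ]; then
      echo "missing: $v"
      return 1
    fi
  done
  echo "environment looks complete"
}
```

With the variables in place, `openstack token issue` is a quick way to confirm the credentials actually authenticate.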
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-builders-parallels-iso
page_title: Parallels ISO - Builders
description: |-
description: |
The Parallels Packer builder is able to create Parallels Desktop for Mac
virtual machines and export them in the PVM format, starting from an ISO
image.
layout: docs
page_title: 'Parallels ISO - Builders'
sidebar_current: 'docs-builders-parallels-iso'
---
# Parallels Builder (from an ISO)
@ -27,7 +27,7 @@ Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:
```json
``` json
{
"type": "parallels-iso",
"guest_os_type": "ubuntu",
@ -88,7 +88,6 @@ builder.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
### Optional:
- `boot_command` (array of strings) - This is an array of commands to type
@ -316,7 +315,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
```text
``` text
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -342,7 +341,7 @@ Extra `prlctl` commands are defined in the template in the `prlctl` section. An
example is shown below that sets the memory and number of CPUs within the
virtual machine:
```json
``` json
{
"prlctl": [
["set", "{{.Name}}", "--memsize", "1024"],
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-builders-parallels-pvm
page_title: Parallels PVM - Builders
description: |-
description: |
This Parallels builder is able to create Parallels Desktop for Mac virtual
machines and export them in the PVM format, starting from an existing PVM
(exported virtual machine image).
layout: docs
page_title: 'Parallels PVM - Builders'
sidebar_current: 'docs-builders-parallels-pvm'
---
# Parallels Builder (from a PVM)
@ -26,7 +26,7 @@ create the image. The imported machine is deleted prior to finishing the build.
Here is a basic example. This example is functional if you have a PVM matching
the settings here.
```json
``` json
{
"type": "parallels-pvm",
"parallels_tools_flavor": "lin",
@ -246,7 +246,7 @@ Extra `prlctl` commands are defined in the template in the `prlctl` section. An
example is shown below that sets the memory and number of CPUs within the
virtual machine:
```json
``` json
{
"prlctl": [
["set", "{{.Name}}", "--memsize", "1024"],
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-parallels
page_title: Parallels - Builders
description: |-
description: |
The Parallels Packer builder is able to create Parallels Desktop for Mac
virtual machines and export them in the PVM format.
layout: docs
page_title: 'Parallels - Builders'
sidebar_current: 'docs-builders-parallels'
---
# Parallels Builder
@ -1,9 +1,8 @@
---
description: 'The ProfitBricks builder is able to create images for ProfitBricks cloud.'
layout: docs
sidebar_current: docs-builders-profitbricks
page_title: ProfitBricks - Builders
description: |-
The ProfitBricks builder is able to create images for ProfitBricks cloud.
page_title: 'ProfitBricks - Builders'
sidebar_current: 'docs-builders-profitbricks'
---
# ProfitBricks Builder
@ -26,10 +25,9 @@ builder.
- `image` (string) - ProfitBricks volume image. Only Linux public images are supported. To obtain a full list of available images, you can use the [ProfitBricks CLI](https://github.com/profitbricks/profitbricks-cli#image).
- `password` (string) - ProfitBricks password. This can be specified via environment variable `PROFITBRICKS_PASSWORD', if provided. The value definded in the config has precedence over environemnt variable.
- `username` (string) - ProfitBricks username. This can be specified via environment variable `PROFITBRICKS_USERNAME', if provided. The value definded in the config has precedence over environemnt variable.
- `password` (string) - ProfitBricks password. This can be specified via the environment variable `PROFITBRICKS_PASSWORD`, if provided. The value defined in the config takes precedence over the environment variable.
- `username` (string) - ProfitBricks username. This can be specified via the environment variable `PROFITBRICKS_USERNAME`, if provided. The value defined in the config takes precedence over the environment variable.
### Optional
@ -49,14 +47,13 @@ builder.
- `snapshot_password` (string) - Password for the snapshot.
- `url` (string) - Endpoint for the ProfitBricks REST API. Default URL "https://api.profitbricks.com/rest/v2"
- `url` (string) - Endpoint for the ProfitBricks REST API. Default URL "<https://api.profitbricks.com/rest/v2>"
## Example
Here is a basic example:
```json
``` json
{
"builders": [
{
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-qemu
page_title: QEMU - Builders
description: |-
description: |
The Qemu Packer builder is able to create KVM and Xen virtual machine images.
Support for Xen is experimental at this time.
layout: docs
page_title: 'QEMU - Builders'
sidebar_current: 'docs-builders-qemu'
---
# QEMU Builder
@ -26,7 +26,7 @@ necessary to run the virtual machine on KVM or Xen.
Here is a basic example. This example is functional so long as you fix up paths
to files, URLs for ISOs, and checksums.
```json
``` json
{
"builders":
[
@ -149,12 +149,12 @@ Linux server and have not enabled X11 forwarding (`ssh -X`).
source, resize it according to `disk_size` and boot the image.
- `disk_interface` (string) - The interface to use for the disk. Allowed
values include any of "ide", "scsi", "virtio" or "virtio-scsi"^* . Note also
values include any of "ide", "scsi", "virtio" or "virtio-scsi"^\* . Note also
that any boot commands or kickstart type scripts must have proper
adjustments for resulting device names. The Qemu builder uses "virtio" by
default.
^* Please be aware that use of the "scsi" disk interface has been disabled
^\* Please be aware that use of the "scsi" disk interface has been disabled
by Red Hat due to a bug described
[here](https://bugzilla.redhat.com/show_bug.cgi?id=1019220).
If you are running Qemu on RHEL or a RHEL variant such as CentOS, you
@ -174,8 +174,7 @@ Linux server and have not enabled X11 forwarding (`ssh -X`).
and \[\]) are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy. The summary size of the
listed files must not exceed 1.44 MB. The supported ways to move large
files into the OS are using `http_directory` or [the file provisioner](
https://www.packer.io/docs/provisioners/file.html).
files into the OS are using `http_directory` or [the file provisioner](https://www.packer.io/docs/provisioners/file.html).
- `floppy_dirs` (array of strings) - A list of directories to place onto
the floppy disk recursively. This is similar to the `floppy_files` option
@ -254,7 +253,7 @@ Linux server and have not enabled X11 forwarding (`ssh -X`).
switch/value pairs. Any value specified as an empty string is ignored. All
values after the switch are concatenated with no separator.
~> **Warning:** The qemu command line allows extreme flexibility, so beware
~> **Warning:** The qemu command line allows extreme flexibility, so beware
of conflicting arguments causing failures of your run. For instance, using
--no-acpi could break the ability to send power signal type commands (e.g.,
shutdown -P now) to the virtual machine, thus preventing proper shutdown. To see
@ -263,7 +262,7 @@ command. The arguments are all printed for review.
The following shows a sample usage:
```json
``` json
{
"qemuargs": [
[ "-m", "1024M" ],
@ -282,23 +281,23 @@ The following shows a sample usage:
would produce the following (not including other defaults supplied by the
builder and not otherwise conflicting with the qemuargs):
```text
``` text
qemu-system-x86 -m 1024m --no-acpi -netdev user,id=mynet0,hostfwd=hostip:hostport-guestip:guestport -device virtio-net,netdev=mynet0
```
~> **Windows Users:** [QEMU for Windows](https://qemu.weilnetz.de/) builds are available though an environmental variable does need
~> **Windows Users:** [QEMU for Windows](https://qemu.weilnetz.de/) builds are available, though an environment variable needs
to be set for QEMU for Windows to redirect stdout to the console instead of stdout.txt.
The following shows the environment variable that needs to be set for Windows QEMU support:
```text
``` text
setx SDL_STDIO_REDIRECT=0
```
You can also use the `SSHHostPort` template variable to produce a packer
template that can be invoked by `make` in parallel:
```json
``` json
{
"qemuargs": [
[ "-netdev", "user,hostfwd=tcp::{{ .SSHHostPort }}-:22,id=forward"],
@ -306,6 +305,7 @@ template that can be invoked by `make` in parallel:
]
}
```
`make -j 3 my-awesome-packer-templates` spawns 3 packer processes, each of which
will bind to its own SSH port as determined by each process. This will also
work with WinRM, just change the port forward in `qemuargs` to map to WinRM's
@ -366,10 +366,10 @@ template.
The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard.
-> Keystrokes are typed as separate key up/down events over VNC with a
default 100ms delay. The delay alleviates issues with latency and CPU
contention. For local builds you can tune this delay by specifying
e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
-> Keystrokes are typed as separate key up/down events over VNC with a
default 100ms delay. The delay alleviates issues with latency and CPU
contention. For local builds you can tune this delay by specifying
e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
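For example, to speed up a single local build (the template name is a placeholder):

``` text
PACKER_KEY_INTERVAL=10ms packer build template.json
```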
There are a set of special keys available. If these are in your boot
command, they will be replaced by the proper key:
@ -418,7 +418,7 @@ command, they will be replaced by the proper key:
sending any additional keys. This is useful if you have to generally wait
for the UI to update before typing more.
- `<waitXX> ` - Add user defined time.Duration pause before sending any
- `<waitXX>` - Add user defined time.Duration pause before sending any
additional keys. For example `<wait10m>` or `<wait1m20s>`
When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them,
@ -438,7 +438,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
CentOS 6.4 installer:
```json
``` json
{
"boot_command": [
"<tab><wait>",
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-triton
page_title: Triton - Builders
description: |-
description: |
The triton Packer builder is able to create new images for use with Triton.
These images can be used with both the Joyent public cloud (which is powered
by Triton) as well with private Triton installations. This builder uses the
Triton Cloud API to create images.
layout: docs
page_title: 'Triton - Builders'
sidebar_current: 'docs-builders-triton'
---
# Triton Builder
@ -30,7 +30,7 @@ This reusable image can then be used to launch new machines.
The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
~> **Private installations of Triton must have custom images enabled!** To use
~> **Private installations of Triton must have custom images enabled!** To use
the Triton builder with a private/on-prem installation of Joyent's Triton
software, you'll need an operator to manually
[enable custom images](https://docs.joyent.com/private-cloud/install/image-management)
@ -138,7 +138,7 @@ builder.
Below is a minimal example to create a joyent-brand image on the Joyent public
cloud:
```json
``` json
{
"builders": [
{
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-virtualbox-iso
page_title: VirtualBox ISO - Builders
description: |-
description: |
The VirtualBox Packer builder is able to create VirtualBox virtual machines
and export them in the OVF format, starting from an ISO image.
layout: docs
page_title: 'VirtualBox ISO - Builders'
sidebar_current: 'docs-builders-virtualbox-iso'
---
# VirtualBox Builder (from an ISO)
@ -26,7 +26,7 @@ Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:
```json
``` json
{
"type": "virtualbox-iso",
"guest_os_type": "Ubuntu_64",
@ -107,7 +107,7 @@ builder.
can be useful for passing product information to include in the resulting
appliance file. Packer JSON configuration file example:
```json
``` json
{
"type": "virtualbox-iso",
"export_opts":
@ -406,7 +406,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
```text
``` text
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -447,7 +447,7 @@ Extra VBoxManage commands are defined in the template in the `vboxmanage`
section. An example is shown below that sets the memory and number of CPUs
within the virtual machine:
```json
``` json
{
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"],
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-builders-virtualbox-ovf
page_title: VirtualBox OVF/OVA - Builders
description: |-
description: |
This VirtualBox Packer builder is able to create VirtualBox virtual machines
and export them in the OVF format, starting from an existing OVF/OVA (exported
virtual machine image).
layout: docs
page_title: 'VirtualBox OVF/OVA - Builders'
sidebar_current: 'docs-builders-virtualbox-ovf'
---
# VirtualBox Builder (from an OVF/OVA)
@ -20,13 +20,11 @@ image).
When exporting from VirtualBox make sure to choose OVF Version 2, since Version
1 is not compatible and will generate errors like this:
```
==> virtualbox-ovf: Progress state: VBOX_E_FILE_ERROR
==> virtualbox-ovf: VBoxManage: error: Appliance read failed
==> virtualbox-ovf: VBoxManage: error: Error reading "source.ova": element "Section" has no "type" attribute, line 21
==> virtualbox-ovf: VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Appliance, interface IAppliance
==> virtualbox-ovf: VBoxManage: error: Context: "int handleImportAppliance(HandlerArg*)" at line 304 of file VBoxManageAppliance.cpp
```
==> virtualbox-ovf: Progress state: VBOX_E_FILE_ERROR
==> virtualbox-ovf: VBoxManage: error: Appliance read failed
==> virtualbox-ovf: VBoxManage: error: Error reading "source.ova": element "Section" has no "type" attribute, line 21
==> virtualbox-ovf: VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Appliance, interface IAppliance
==> virtualbox-ovf: VBoxManage: error: Context: "int handleImportAppliance(HandlerArg*)" at line 304 of file VBoxManageAppliance.cpp
The builder builds a virtual machine by importing an existing OVF or OVA file.
It then boots this image, runs provisioners on this new VM, and exports that VM
@ -38,7 +36,7 @@ build.
Here is a basic example. This example is functional if you have an OVF matching
the settings here.
```json
``` json
{
"type": "virtualbox-ovf",
"source_path": "source.ovf",
@ -100,7 +98,7 @@ builder.
can be useful for passing product information to include in the resulting
appliance file. Packer JSON configuration file example:
```json
``` json
{
"type": "virtualbox-ovf",
"export_opts":
@ -358,7 +356,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
```text
``` text
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -399,7 +397,7 @@ Extra VBoxManage commands are defined in the template in the `vboxmanage`
section. An example is shown below that sets the memory and number of CPUs
within the virtual machine:
```json
``` json
{
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"],
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-virtualbox
page_title: VirtualBox - Builders
description: |-
description: |
The VirtualBox Packer builder is able to create VirtualBox virtual machines
and export them in the OVA or OVF format.
layout: docs
page_title: 'VirtualBox - Builders'
sidebar_current: 'docs-builders-virtualbox'
---
# VirtualBox Builder

---
layout: docs
sidebar_current: docs-builders-vmware-iso
page_title: VMware ISO - Builders
description: |-
description: |
This VMware Packer builder is able to create VMware virtual machines from an
ISO file as a source. It currently supports building virtual machines on hosts
running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and
VMware Player on Linux. It can also build machines directly on VMware vSphere
Hypervisor using SSH as opposed to the vSphere API.
layout: docs
page_title: 'VMware ISO - Builders'
sidebar_current: 'docs-builders-vmware-iso'
---
# VMware Builder (from ISO)
@ -35,7 +35,7 @@ Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu to
self-install. Still, the example serves to show the basic configuration:
```json
``` json
{
"type": "vmware-iso",
"iso_url": "http://old-releases.ubuntu.com/releases/precise/ubuntu-12.04.2-server-amd64.iso",
@ -317,10 +317,10 @@ template.
The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard.
-> Keystrokes are typed as separate key up/down events over VNC with a
default 100ms delay. The delay alleviates issues with latency and CPU
contention. For local builds you can tune this delay by specifying
e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
-> Keystrokes are typed as separate key up/down events over VNC with a
default 100ms delay. The delay alleviates issues with latency and CPU
contention. For local builds you can tune this delay by specifying
e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
There are a set of special keys available. If these are in your boot
command, they will be replaced by the proper key:
@ -387,7 +387,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
```text
``` text
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -410,7 +410,7 @@ file](https://github.com/hashicorp/packer/blob/20541a7eda085aa5cf35bfed5069592ca
But for advanced users, this template can be customized. This allows Packer to
build virtual machines of effectively any guest operating system type.
~> **This is an advanced feature.** Modifying the VMX template can easily
~> **This is an advanced feature.** Modifying the VMX template can easily
cause your virtual machine to not boot properly. Please only modify the template
if you know what you're doing.
@ -431,12 +431,12 @@ In addition to using the desktop products of VMware locally to build virtual
machines, Packer can use a remote VMware Hypervisor to build the virtual
machine.
-> **Note:** Packer supports ESXi 5.1 and above.
-> **Note:** Packer supports ESXi 5.1 and above.
Before using a remote vSphere Hypervisor, you need to enable GuestIPHack by
running the following command:
```text
``` text
esxcli system settings advanced set -o /Net/GuestIPHack -i 1
```
@ -482,7 +482,6 @@ modify as well:
format of the exported virtual machine. This defaults to "ovf".
Before using this option, you need to install `ovftool`.
### VNC port discovery
Packer needs to decide on a port to use for VNC when building remotely. To find
@ -503,7 +502,7 @@ Depending on your network configuration, it may be difficult to use packer's
built-in HTTP server with ESXi. Instead, you can provide a kickstart or preseed
file by attaching a floppy disk. An example below, based on RHEL:
```json
``` json
{
"builders": [
{
@ -519,7 +518,7 @@ file by attaching a floppy disk. An example below, based on RHEL:
It's also worth noting that `ks=floppy` has been deprecated. Later versions of the Anaconda installer (used in RHEL/CentOS 7 and Fedora) may require a different syntax to source a kickstart file from a mounted floppy image.
```json
``` json
{
"builders": [
{
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-builders-vmware-vmx
page_title: VMware VMX - Builders
description: |-
description: |
This VMware Packer builder is able to create VMware virtual machines from an
existing VMware virtual machine (a VMX file). It currently supports building
virtual machines on hosts running VMware Fusion Professional for OS X, VMware
Workstation for Linux and Windows, and VMware Player on Linux.
layout: docs
page_title: 'VMware VMX - Builders'
sidebar_current: 'docs-builders-vmware-vmx'
---
# VMware Builder (from VMX)
@ -32,7 +32,7 @@ VMware virtual machine.
Here is an example. This example is fully functional as long as the source path
points to a real VMX file with the proper settings:
```json
``` json
{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
@ -193,10 +193,10 @@ template.
The boot command is "typed" character for character over a VNC connection to the
machine, simulating a human actually typing the keyboard.
-> Keystrokes are typed as separate key up/down events over VNC with a
default 100ms delay. The delay alleviates issues with latency and CPU
contention. For local builds you can tune this delay by specifying
e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
-> Keystrokes are typed as separate key up/down events over VNC with a
default 100ms delay. The delay alleviates issues with latency and CPU
contention. For local builds you can tune this delay by specifying
e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
There are a set of special keys available. If these are in your boot
command, they will be replaced by the proper key:
@ -259,7 +259,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:
```text
``` text
[
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic ",
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-builders-vmware
page_title: VMware - Builders
description: |-
description: |
The VMware Packer builder is able to create VMware virtual machines for use
with any VMware product.
layout: docs
page_title: 'VMware - Builders'
sidebar_current: 'docs-builders-vmware'
---
# VMware Builder
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-commands-build
page_title: packer build - Commands
description: |-
description: |
The `packer build` command takes a template and runs all the builds within it
in order to generate a set of artifacts. The various builds specified within a
template are executed in parallel, unless otherwise specified. And the
artifacts that are created will be outputted at the end of the build.
layout: docs
page_title: 'packer build - Commands'
sidebar_current: 'docs-commands-build'
---
# `build` Command
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-commands-fix
page_title: packer fix - Commands
description: |-
description: |
The `packer fix` command takes a template and finds backwards incompatible
parts of it and brings it up to date so it can be used with the latest version
of Packer. After you update to a new Packer release, you should run the fix
command to make sure your templates work with the new release.
layout: docs
page_title: 'packer fix - Commands'
sidebar_current: 'docs-commands-fix'
---
# `fix` Command
@ -20,7 +20,7 @@ The fix command will output the changed template to standard out, so you should
redirect standard using standard OS-specific techniques if you want to save it
to a file. For example, on Linux systems, you may want to do this:
```shell
``` shell
$ packer fix old.json > new.json
```
@ -28,7 +28,7 @@ If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
output, you'll still see error messages.
-> **Even when Packer fix doesn't do anything** to the template, the template
-> **Even when Packer fix doesn't do anything** to the template, the template
will be outputted to standard out. Things such as configuration key ordering and
indentation may be changed. The output format however, is pretty-printed for
human readability.
@ -1,13 +1,13 @@
---
layout: docs
sidebar_current: docs-commands
page_title: Commands
description: |-
description: |
Packer is controlled using a command-line interface. All interaction with
Packer is done via the `packer` tool. Like many other command-line tools, the
`packer` tool takes a subcommand to execute, and that subcommand may have
additional options as well. Subcommands are executed with `packer SUBCOMMAND`,
where "SUBCOMMAND" is the actual command you wish to execute.
layout: docs
page_title: Commands
sidebar_current: 'docs-commands'
---
# Packer Commands (CLI)
@ -46,7 +46,7 @@ The machine-readable output format can be enabled by passing the
output to become machine-readable on stdout. Logging, if enabled, continues to
appear on stderr. An example of the output is shown below:
```text
``` text
$ packer -machine-readable version
1376289459,,version,0.2.4
1376289459,,version-prerelease,
@ -58,7 +58,7 @@ The format will be covered in more detail later. But as you can see, the output
immediately becomes machine-friendly. Try some other commands with the
`-machine-readable` flag to see!
~> The `-machine-readable` flag is designed for automated environments and is
~> The `-machine-readable` flag is designed for automated environments and is
mutually-exclusive with the `-debug` flag, which is designed for interactive
environments.
@ -70,7 +70,7 @@ This makes it more convenient to parse using standard Unix tools such as `awk` o
The format is:
```text
``` text
timestamp,target,type,data...
```
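Since each line is plain comma-separated text, it can be split with standard
Unix tools; below is a small sketch using `awk` on the sample `version` line
shown above:

``` shell
# Split one machine-readable output line on commas and print the
# message type together with its first data field.
echo '1376289459,,version,0.2.4' | awk -F, '{ print $3 "=" $4 }'
# => version=0.2.4
```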
@ -1,13 +1,13 @@
---
layout: docs
sidebar_current: docs-commands-inspect
page_title: packer inspect - Commands
description: |-
description: |
The `packer inspect` command takes a template and outputs the various
components a template defines. This can help you quickly learn about a
template without having to dive into the JSON itself. The command will tell
you things like what variables a template accepts, the builders it defines,
the provisioners it defines and the order they'll run, and more.
layout: docs
page_title: 'packer inspect - Commands'
sidebar_current: 'docs-commands-inspect'
---
# `inspect` Command
@ -30,7 +30,7 @@ your template by necessity.
Given a basic template, here is an example of what the output might look like:
```text
``` text
$ packer inspect template.json
Variables and their defaults:
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-commands-push
page_title: packer push - Commands
description: |-
description: |
The `packer push` command uploads a template and other required files to the
Atlas build service, which will run your packer build for you.
layout: docs
page_title: 'packer push - Commands'
sidebar_current: 'docs-commands-push'
---
# `push` Command
@ -23,7 +23,7 @@ artifacts in Atlas. In order to do that you will also need to configure the
[Atlas post-processor](/docs/post-processors/atlas.html). This is optional, and
both the post-processor and push commands can be used independently.
!> The push command uploads your template and other files, like provisioning
!> The push command uploads your template and other files, like provisioning
scripts, to Atlas. Take care not to upload files that you don't intend to, like
secrets or large binaries. **If you have secrets in your Packer template, you
should [move them into environment
@ -68,13 +68,13 @@ configuration using the options below.
Push a Packer template:
```shell
``` shell
$ packer push template.json
```
Push a Packer template with a custom token:
```shell
``` shell
$ packer push -token ABCD1234 template.json
```
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-commands-validate
page_title: packer validate - Commands
description: |-
description: |
The `packer validate` Packer command is used to validate the syntax and
configuration of a template. The command will return a zero exit status on
success, and a non-zero exit status on failure. Additionally, if a template
doesn't validate, any error messages will be outputted.
layout: docs
page_title: 'packer validate - Commands'
sidebar_current: 'docs-commands-validate'
---
# `validate` Command
@ -19,7 +19,7 @@ be outputted.
Example usage:
```text
``` text
$ packer validate my-template.json
Template validation failed. Errors are shown below.
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-extending-custom-builders
page_title: Custom Builders - Extending
description: |-
description: |
It is possible to write custom builders using the Packer plugin interface, and
this page documents how to do that.
layout: docs
page_title: 'Custom Builders - Extending'
sidebar_current: 'docs-extending-custom-builders'
---
# Custom Builders
@ -19,7 +19,7 @@ plugin interface, and this page documents how to do that.
Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extending/plugins.html).
~> **Warning!** This is an advanced topic. If you're new to Packer, we
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
## The Interface
@ -29,7 +29,7 @@ interface. It is reproduced below for reference. The actual interface in the
source code contains some basic documentation as well explaining what each
method should do.
```go
``` go
type Builder interface {
Prepare(...interface{}) error
Run(ui Ui, hook Hook, cache Cache) (Artifact, error)
@ -134,14 +134,14 @@ When the machine is ready to be provisioned, run the `packer.HookProvision`
hook, making sure the communicator is not nil, since this is required for
provisioners. An example of calling the hook is shown below:
```go
``` go
hook.Run(packer.HookProvision, ui, comm, nil)
```
At this point, Packer will run the provisioners and no additional work is
necessary.
-> **Note:** Hooks are still undergoing thought around their general design
-> **Note:** Hooks are still undergoing thought around their general design
and will likely change in a future version. They aren't fully "baked" yet, so
they aren't documented here other than to tell you how to hook in provisioners.
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-extending-custom-post-processors
page_title: Custom Post-Processors - Extending
description: |-
description: |
Packer Post-processors are the components of Packer that transform one
artifact into another, for example by compressing files, or uploading them.
layout: docs
page_title: 'Custom Post-Processors - Extending'
sidebar_current: 'docs-extending-custom-post-processors'
---
# Custom Post-Processors
@ -24,7 +24,7 @@ development basics](/docs/extending/plugins.html).
Post-processor plugins implement the `packer.PostProcessor` interface and are
served using the `plugin.ServePostProcessor` function.
~> **Warning!** This is an advanced topic. If you're new to Packer, we
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
## The Interface
@ -34,7 +34,7 @@ The interface that must be implemented for a post-processor is the
actual interface in the source code contains some basic documentation as well
explaining what each method should do.
```go
``` go
type PostProcessor interface {
Configure(interface{}) error
PostProcess(Ui, Artifact) (a Artifact, keep bool, err error)
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-extending-custom-provisioners
page_title: Custom Provisioners - Extending
description: |-
description: |
Packer Provisioners are the components of Packer that install and configure
software into a running machine prior to turning that machine into an image.
An example of a provisioner is the shell provisioner, which runs shell scripts
within the machines.
layout: docs
page_title: 'Custom Provisioners - Extending'
sidebar_current: 'docs-extending-custom-provisioners'
---
# Custom Provisioners
@ -23,7 +23,7 @@ development basics](/docs/extending/plugins.html).
Provisioner plugins implement the `packer.Provisioner` interface and are served
using the `plugin.ServeProvisioner` function.
~> **Warning!** This is an advanced topic. If you're new to Packer, we
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
## The Interface
@ -33,7 +33,7 @@ The interface that must be implemented for a provisioner is the
actual interface in the source code contains some basic documentation as well
explaining what each method should do.
```go
``` go
type Provisioner interface {
Prepare(...interface{}) error
Provision(Ui, Communicator) error
@ -90,7 +90,7 @@ itself](https://github.com/hashicorp/packer/blob/master/packer/communicator.go)
is really great as an overview of how to use the interface. You should begin by
reading this. Once you have read it, you can see some example usage below:
```go
``` go
// Build the remote command.
var cmd packer.RemoteCmd
cmd.Command = "echo foo"
@ -1,11 +1,11 @@
---
layout: docs
page_title: Extending
sidebar_current: docs-extending
description: |-
description: |
Packer is designed to be extensible. Because the surface area for workloads is
infinite, Packer supports plugins for builders, provisioners, and
post-processors.
layout: docs
page_title: Extending
sidebar_current: 'docs-extending'
---
# Extending Packer
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-extending-plugins
page_title: Plugins - Extending
description: |-
description: |
Packer Plugins allow new functionality to be added to Packer without modifying
the core source code. Packer plugins are able to add new commands, builders,
provisioners, hooks, and more.
layout: docs
page_title: 'Plugins - Extending'
sidebar_current: 'docs-extending-plugins'
---
# Plugins
@ -80,7 +80,7 @@ assumed that you're familiar with the language. This page will not be a Go
language tutorial. Thankfully, if you are familiar with Go, the Go toolchain
provides many conveniences to help to develop Packer plugins.
~> **Warning!** This is an advanced topic. If you're new to Packer, we
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
### Plugin System Architecture
@ -131,7 +131,7 @@ There are two steps involved in creating a plugin:
A basic example is shown below. In this example, assume the `Builder` struct
implements the `packer.Builder` interface:
```go
``` go
import (
"github.com/hashicorp/packer/packer/plugin"
)
@ -155,7 +155,7 @@ using standard installation procedures.
The specifics of how to implement each type of interface are covered in the
relevant subsections available in the navigation to the left.
~> **Lock your dependencies!** Unfortunately, Go's dependency management
~> **Lock your dependencies!** Unfortunately, Go's dependency management
story is fairly sad. There are various unofficial methods out there for locking
dependencies, and using one of them is highly recommended since the Packer
codebase will continue to improve, potentially breaking APIs along the way until
@ -171,7 +171,7 @@ visible on stderr when the `PACKER_LOG` environmental is set.
Packer will prefix any logs from plugins with the path to that plugin to make it
identifiable where the logs come from. Some example logs are shown below:
```text
``` text
2013/06/10 21:44:43 ui: Available commands are:
2013/06/10 21:44:43 Loading command: build
2013/06/10 21:44:43 packer-command-build: 2013/06/10 21:44:43 Plugin minimum port: 10000
@ -203,7 +203,7 @@ While developing plugins, you can configure your Packer configuration to point
directly to the compiled plugin in order to test it. For example, building the
CustomCloud plugin, I may configure packer like so:
```json
``` json
{
"builders": {
"custom-cloud": "/an/absolute/path/to/packer-builder-custom-cloud"
@ -1,11 +1,11 @@
---
layout: docs
page_title: Documentation
description: |-
description: |
Welcome to the Packer documentation! This documentation is more of a reference
guide for all available features and options in Packer. If you're just getting
started with Packer, please start with the introduction and getting started
guide instead.
layout: docs
page_title: Documentation
---
# Packer Documentation
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-install
page_title: Install
description: |-
description: |
Installing Packer is simple. You can download a precompiled binary or compile
from source. This page details both methods.
layout: docs
page_title: Install
sidebar_current: 'docs-install'
---
# Install Packer
@ -13,7 +13,7 @@ Installing Packer is simple. There are two approaches to installing Packer:
1. Using a [precompiled binary](#precompiled-binaries)
1. Installing [from source](#compiling-from-source)
2. Installing [from source](#compiling-from-source)
Downloading a precompiled binary is easiest, and we provide downloads over TLS
along with SHA256 sums to verify the binary. We also distribute a PGP signature
@ -38,27 +38,27 @@ To compile from source, you will need [Go](https://golang.org) installed and
configured properly (including a `GOPATH` environment variable set), as well
as a copy of [`git`](https://www.git-scm.com/) in your `PATH`.
1. Clone the Packer repository from GitHub into your `GOPATH`:
1. Clone the Packer repository from GitHub into your `GOPATH`:
```shell
``` shell
$ mkdir -p $GOPATH/src/github.com/mitchellh && cd $_
$ git clone https://github.com/mitchellh/packer.git
$ cd packer
```
1. Bootstrap the project. This will download and compile libraries and tools
2. Bootstrap the project. This will download and compile libraries and tools
needed to compile Packer:
```shell
``` shell
$ make bootstrap
```
1. Build Packer for your current system and put the
3. Build Packer for your current system and put the
binary in `./bin/` (relative to the git checkout). The `make dev` target is
just a shortcut that builds `packer` for only your local build environment (no
cross-compiled targets).
```shell
``` shell
$ make dev
```
@ -68,6 +68,6 @@ To verify Packer is properly installed, run `packer -v` on your system. You
should see help output. If you are executing it from the command line, make sure
it is on your PATH or you may get an error about Packer not being found.
```shell
``` shell
$ packer -v

@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-other-core-configuration
page_title: Core Configuration - Other
description: |-
description: |
There are a few configuration settings that affect Packer globally by
configuring the core of Packer. These settings all have reasonable defaults,
so you generally don't have to worry about it until you want to tweak a
configuration.
layout: docs
page_title: 'Core Configuration - Other'
sidebar_current: 'docs-other-core-configuration'
---

@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-other-debugging
page_title: Debugging - Other
description: |-
description: |
Packer strives to be stable and bug-free, but issues inevitably arise where
certain things may not work entirely correctly, or may not appear to work
correctly.
layout: docs
page_title: 'Debugging - Other'
sidebar_current: 'docs-other-debugging'
---
# Debugging Packer Builds
@ -66,7 +66,7 @@ In Windows you can set the detailed logs environmental variable `PACKER_LOG` or
the log variable `PACKER_LOG_PATH` using powershell environment variables. For
example:
```powershell
``` powershell
$env:PACKER_LOG=1
$env:PACKER_LOG_PATH="packerlog.txt"
```
@ -80,10 +80,8 @@ Issues may arise using and building Ubuntu AMIs where common packages that
*should* be installed from Ubuntu's Main repository are not found during a
provisioner step:
```
amazon-ebs: No candidate version found for build-essential
amazon-ebs: No candidate version found for build-essential
```
amazon-ebs: No candidate version found for build-essential
amazon-ebs: No candidate version found for build-essential
This can obviously cause problems where a build is unable to finish
successfully as the proper packages cannot be provisioned correctly. The problem
@ -94,7 +92,7 @@ Adding the following provisioner to the packer template allows the
cloud-init process to fully finish before packer starts provisioning the source
AMI.
```json
``` json
{
"type": "shell",
"inline": [
@ -103,7 +101,6 @@ AMI.
}
```
## Issues when using numerous Builders/Provisioners/Post-Processors
Packer uses a separate process for each builder, provisioner, post-processor,
@ -111,13 +108,12 @@ and plugin. In certain cases, if you have too many of these, you can run out of
[file descriptors](https://en.wikipedia.org/wiki/File_descriptor). This results
in an error that might look like
```text
``` text
error initializing provisioner 'powershell': fork/exec /files/go/bin/packer:
too many open files
```
On Unix systems, you can check what your file descriptor limit is with `ulimit
-Sn`. You should check with your OS vendor on how to raise this limit.
On Unix systems, you can check what your file descriptor limit is with `ulimit -Sn`. You should check with your OS vendor on how to raise this limit.
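For example, on most Unix shells (a sketch; the exact limit and how to raise
it persistently are OS-specific):

``` shell
# Show the soft limit on open file descriptors for this shell session.
# `ulimit -Hn` shows the hard ceiling; `ulimit -n <value>` raises the soft
# limit up to that ceiling for the current session only.
ulimit -Sn
```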
## Issues when using long temp directory
@ -126,7 +122,7 @@ directory for temporary files. Some operating systems place a limit on the
length of the socket name, usually between 80 and 110 characters. If you get an
error like this (for any builder, not just docker):
```text
``` text
Failed to initialize build 'docker': error initializing builder 'docker': plugin exited before we could connect
```
@ -1,9 +1,8 @@
---
description: 'Packer uses a variety of environmental variables.'
layout: docs
sidebar_current: docs-other-environment-variables
page_title: Environment Variables - Other
description: |-
Packer uses a variety of environmental variables.
page_title: 'Environment Variables - Other'
sidebar_current: 'docs-other-environment-variables'
---
# Environment Variables for Packer
@ -4,7 +4,7 @@ description: |
various builders and imports it to an Alicloud customized image list.
layout: docs
page_title: 'Alicloud Import Post-Processor'
...
---
# Alicloud Import Post-Processor
@ -38,10 +38,10 @@ two categories: required and optional parameters.
- `region` (string) - This is the Alicloud region. It must be provided, but it
can also be sourced from the `ALICLOUD_REGION` environment variables.
- `image_name` (string) - The name of the user-defined image, [2, 128] English
- `image_name` (string) - The name of the user-defined image, \[2, 128\] English
or Chinese characters. It must begin with an uppercase/lowercase letter or
a Chinese character, and may contain numbers, `_` or `-`. It cannot begin
with http:// or https://.
with <http://> or <https://>.
- `oss_bucket_name` (string) - The name of the OSS bucket where the RAW or VHD
file will be copied for import. If the bucket doesn't exist, the post-processor
@ -52,7 +52,7 @@ two categories: required and optional parameters.
- `image_platform` (string) - Platform of the image, such as `CentOS`
- `image_architecture` (string) - Platform type of the image system: i386
| x86_64
| x86\_64
- `format` (string) - The format of the image for import; currently Alicloud only
supports RAW and VHD.
@ -69,7 +69,7 @@ two categories: required and optional parameters.
- `image_description` (string) - The description of the image, with a length
limit of 0 to 256 characters. Leaving it blank means null, which is the
default value. It cannot begin with http:// or https://.
default value. It cannot begin with <http://> or <https://>.
- `image_force_delete` (bool) - If this value is true, when the target image
name is duplicated with an existing image, it will delete the existing image
@ -78,8 +78,8 @@ two categories: required and optional parameters.
- `image_system_size` (int) - Size of the system disk, in GB, values range:
- cloud - 5 ~ 2000
- cloud_efficiency - 20 ~ 2048
- cloud_ssd - 20 ~ 2048
- cloud\_efficiency - 20 ~ 2048
- cloud\_ssd - 20 ~ 2048
## Basic Example
@ -89,7 +89,7 @@ artifact. The user must have the role `AliyunECSImageImportDefaultRole` with
role and policy for you if you have the privilege, otherwise, you have to ask
the administrator configure for you in advance.
```json
``` json
"post-processors":[
{
"access_key":"{{user `access_key`}}",
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-post-processors-amazon-import
page_title: Amazon Import - Post-Processors
description: |-
description: |
The Packer Amazon Import post-processor takes an OVA artifact from various
builders and imports it to an AMI available to Amazon Web Services EC2.
layout: docs
page_title: 'Amazon Import - Post-Processors'
sidebar_current: 'docs-post-processors-amazon-import'
---
# Amazon Import Post-Processor
@ -13,7 +13,7 @@ Type: `amazon-import`
The Packer Amazon Import post-processor takes an OVA artifact from various builders and imports it to an AMI available to Amazon Web Services EC2.
~> This post-processor is for advanced users. It depends on specific IAM roles inside AWS and is best used with images that operate with the EC2 configuration model (eg, cloud-init for Linux systems). Please ensure you read the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) before using this post-processor.
~> This post-processor is for advanced users. It depends on specific IAM roles inside AWS and is best used with images that operate with the EC2 configuration model (eg, cloud-init for Linux systems). Please ensure you read the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) before using this post-processor.
## How Does it Work?
@ -90,7 +90,7 @@ Optional:
Here is a basic example. This assumes that the builder has produced an OVA artifact for us to work with, and IAM roles for import exist in the AWS account being imported into.
```json
``` json
{
"type": "amazon-import",
"access_key": "YOUR KEY HERE",
@ -104,7 +104,7 @@ Here is a basic example. This assumes that the builder has produced an OVA artif
}
```
-> **Note:** Packer can also read the access key and secret access key from
-> **Note:** Packer can also read the access key and secret access key from
environmental variables. See the configuration reference in the section above
for more information on what environmental variables Packer will look for.
@ -1,14 +1,14 @@
---
layout: docs
sidebar_current: docs-post-processors-artifice
page_title: Artifice - Post-Processors
description: |-
description: |
The artifice post-processor overrides the artifact list from an upstream
builder or post-processor. All downstream post-processors will see the new
artifacts you specify. The primary use-case is to build artifacts inside a
packer builder -- for example, spinning up an EC2 instance to build a docker
container -- and then extracting the docker container and throwing away the
EC2 instance.
layout: docs
page_title: 'Artifice - Post-Processors'
sidebar_current: 'docs-post-processors-artifice'
---
# Artifice Post-Processor
@ -65,15 +65,15 @@ The configuration allows you to specify which files comprise your artifact.
This minimal example:
1. Spins up a cloned VMware virtual machine
1. Installs a [consul](https://www.consul.io/) release
1. Downloads the consul binary
1. Packages it into a `.tar.gz` file
1. Uploads it to Atlas.
2. Installs a [consul](https://www.consul.io/) release
3. Downloads the consul binary
4. Packages it into a `.tar.gz` file
5. Uploads it to Atlas.
VMX is a fast way to build and test locally, but you can easily substitute
another builder.
```json
``` json
{
"builders": [
{
@ -128,7 +128,7 @@ proceeding artifact is passed to subsequent post-processors. If you use only one
set of square braces the post-processors will run individually against the build
artifact (the vmx file in this case) and it will not have the desired result.
```json
``` json
{
"post-processors": [
[ // <--- Start post-processor chain
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-post-processors-atlas
page_title: Atlas - Post-Processor
description: |-
description: |
The Atlas post-processor for Packer receives an artifact from a Packer build
and uploads it to Atlas. Atlas hosts and serves artifacts, allowing you to
version and distribute them in a simple way.
layout: docs
page_title: 'Atlas - Post-Processor'
sidebar_current: 'docs-post-processors-atlas'
---
# Atlas Post-Processor
@ -22,7 +22,7 @@ You can also use the push command to [run packer builds in
Atlas](/docs/commands/push.html). The push command and Atlas post-processor
can be used together or independently.
~> If you'd like to publish a Vagrant box to [Vagrant Cloud](https://vagrantcloud.com), you must use the [`vagrant-cloud`](/docs/post-processors/vagrant-cloud.html) post-processor.
~> If you'd like to publish a Vagrant box to [Vagrant Cloud](https://vagrantcloud.com), you must use the [`vagrant-cloud`](/docs/post-processors/vagrant-cloud.html) post-processor.
## Workflow
@ -36,11 +36,11 @@ Here is an example workflow:
1. Packer builds an AMI with the [Amazon AMI
builder](/docs/builders/amazon.html)
1. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
The `atlas` post-processor is configured with the name of the AMI, for
example `hashicorp/foobar`, to create the artifact in Atlas or update the
version if the artifact already exists
1. The new version is ready and available to be used in deployments with a
3. The new version is ready and available to be used in deployments with a
tool like [Terraform](https://www.terraform.io)
## Configuration
@ -66,7 +66,7 @@ The configuration allows you to specify and access the artifact in Atlas.
- `token` (string) - Your access token for the Atlas API.
-> Login to Atlas to [generate an Atlas
-> Login to Atlas to [generate an Atlas
Token](https://atlas.hashicorp.com/settings/tokens). The most convenient way to
configure your token is to set it to the `ATLAS_TOKEN` environment variable, but
you can also use `token` configuration option.
@ -95,7 +95,7 @@ you can also use `token` configuration option.
### Example Configuration
```json
``` json
{
"variables": {
"aws_access_key": "ACCESS_KEY_HERE",
@ -1,14 +1,14 @@
---
layout: docs
sidebar_current: docs-post-processors-checksum
page_title: Checksum - Post-Processors
description: |-
description: |
The checksum post-processor computes the specified checksums for the artifact
list from an upstream builder or post-processor. All downstream post-processors
will see the new artifacts. The primary use case is computing checksums for
artifacts so they can be verified later. This post-processor first receives
the artifact, computes its checksum, and then passes both the original
artifacts and the checksum files on to the next post-processor.
layout: docs
page_title: 'Checksum - Post-Processors'
sidebar_current: 'docs-post-processors-checksum'
---
# Checksum Post-Processor
@ -32,7 +32,7 @@ post-processor.
The example below is fully functional.
```json
``` json
{
"type": "checksum"
}
@ -43,14 +43,14 @@ The example below is fully functional.
Optional parameters:
- `checksum_types` (array of strings) - An array of strings of checksum types
to compute. Allowed values are md5, sha1, sha224, sha256, sha384, sha512.
to compute. Allowed values are md5, sha1, sha224, sha256, sha384, sha512.
- `output` (string) - Specify filename to store checksums. This defaults to
`packer_{{.BuildName}}_{{.BuilderType}}_{{.ChecksumType}}.checksum`. For
example, if you had a builder named `database`, you might see the file
written as `packer_database_docker_md5.checksum`. The following variables are
available to use in the output template:
* `BuildName`: The name of the builder that produced the artifact.
* `BuilderType`: The type of builder used to produce the artifact.
* `ChecksumType`: The type of checksums the file contains. This should be
- `BuildName`: The name of the builder that produced the artifact.
- `BuilderType`: The type of builder used to produce the artifact.
- `ChecksumType`: The type of checksums the file contains. This should be
used if you have more than one value in `checksum_types`.
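As a sketch of how these options combine (the paths shown are illustrative,
not defaults):

``` json
{
  "type": "checksum",
  "checksum_types": ["md5", "sha256"],
  "output": "checksums/{{.BuildName}}_{{.ChecksumType}}.checksum"
}
```

With two entries in `checksum_types`, the `ChecksumType` variable keeps the
two output files distinct.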
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-post-processors-compress
page_title: Compress - Post-Processors
description: |-
description: |
The Packer compress post-processor takes an artifact with files (such as from
VMware or VirtualBox) and compresses the artifact into a single archive.
layout: docs
page_title: 'Compress - Post-Processors'
sidebar_current: 'docs-post-processors-compress'
---
# Compress Post-Processor
@ -52,21 +52,21 @@ compress.
Some minimal examples are shown below, showing only the post-processor
configuration:
```json
``` json
{
"type": "compress",
"output": "archive.tar.lz4"
}
```
```json
``` json
{
"type": "compress",
"output": "{{.BuildName}}_bundle.zip"
}
```
```json
``` json
{
"type": "compress",
"output": "log_{{.BuildName}}.gz",
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-post-processors-docker-import
page_title: Docker Import - Post-Processors
description: |-
description: |
The Packer Docker import post-processor takes an artifact from the docker
builder and imports it with Docker locally. This allows you to apply a
repository and tag to the image and lets you use the other Docker
post-processors such as docker-push to push the image to a registry.
layout: docs
page_title: 'Docker Import - Post-Processors'
sidebar_current: 'docs-post-processors-docker-import'
---
# Docker Import Post-Processor
@ -33,7 +33,7 @@ is optional.
An example is shown below, showing only the post-processor configuration:
```json
``` json
{
"type": "docker-import",
"repository": "mitchellh/packer",
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-post-processors-docker-push
page_title: Docker Push - Post-Processors
description: |-
description: |
The Packer Docker push post-processor takes an artifact from the docker-import
post-processor and pushes it to a Docker registry.
layout: docs
page_title: 'Docker Push - Post-Processors'
sidebar_current: 'docs-post-processors-docker-push'
---
# Docker Push Post-Processor
@ -48,12 +48,12 @@ This post-processor has only optional configuration:
- `login_server` (string) - The server address to login to.
Note: When using _Docker Hub_ or _Quay_ registry servers, `login` must be
Note: When using *Docker Hub* or *Quay* registry servers, `login` must be
set to `true` and `login_email`, `login_username`, **and** `login_password`
must be set to your registry credentials. When using Docker Hub,
`login_server` can be omitted.
-> **Note:** If you login using the credentials above, the post-processor
-&gt; **Note:** If you login using the credentials above, the post-processor
will automatically log you out afterwards (just the server specified).
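For a private registry, a configuration along these lines would log in, push,
and log back out (the server address and credentials are placeholders):

``` json
{
  "type": "docker-push",
  "login": true,
  "login_username": "myuser",
  "login_password": "mypassword",
  "login_server": "https://registry.example.com"
}
```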
## Example
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-post-processors-docker-save
page_title: Docker Save - Post-Processors
description: |-
description: |
The Packer Docker Save post-processor takes an artifact from the docker
builder that was committed and saves it to a file. This is similar to
exporting the Docker image directly from the builder, except that it preserves
the hierarchy of images and metadata.
layout: docs
page_title: 'Docker Save - Post-Processors'
sidebar_current: 'docs-post-processors-docker-save'
---
# Docker Save Post-Processor
@ -32,7 +32,7 @@ The configuration for this post-processor only requires one option.
An example is shown below, showing only the post-processor configuration:
```json
``` json
{
"type": "docker-save",
"path": "foo.tar"
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-post-processors-docker-tag
page_title: Docker Tag - Post-Processors
description: |-
description: |
The Packer Docker Tag post-processor takes an artifact from the docker builder
that was committed and tags it into a repository. This allows you to use the
other Docker post-processors such as docker-push to push the image to a
registry.
layout: docs
page_title: 'Docker Tag - Post-Processors'
sidebar_current: 'docs-post-processors-docker-tag'
---
# Docker Tag Post-Processor
@ -41,7 +41,7 @@ are optional.
An example is shown below, showing only the post-processor configuration:
```json
``` json
{
"type": "docker-tag",
"repository": "mitchellh/packer",
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-post-processors-googlecompute-export
page_title: Google Compute Image Exporter - Post-Processors
description: |-
description: |
The Google Compute Image Exporter post-processor exports an image from a
Packer googlecompute builder run and uploads it to Google Cloud Storage. The
exported images can be easily shared and uploaded to other Google Cloud
Projects.
layout: docs
page_title: 'Google Compute Image Exporter - Post-Processors'
sidebar_current: 'docs-post-processors-googlecompute-export'
---
# Google Compute Image Exporter Post-Processor
@ -25,7 +25,6 @@ to the provided GCS `paths` using the same credentials.
As such, the authentication credentials that built the image must have write
permissions to the GCS `paths`.
## Configuration
### Required
@ -50,7 +49,7 @@ In order for this example to work, the account associated with `account.json` must
have write access to both `gs://mybucket1/path/to/file1.tar.gz` and
`gs://mybucket2/path/to/file2.tar.gz`.
```json
``` json
{
"builders": [
{
@ -1,10 +1,10 @@
---
layout: docs
page_title: Post-Processors
sidebar_current: docs-post-processors
description: |-
description: |
Post-processors run after the image is built by the builder and provisioned by
the provisioner(s).
layout: docs
page_title: 'Post-Processors'
sidebar_current: 'docs-post-processors'
---
# Post-Processors
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-post-processors-manifest
page_title: Manifest - Post-Processors
description: |-
description: |
The manifest post-processor writes a JSON file with the build artifacts and
IDs from a packer run.
layout: docs
page_title: 'Manifest - Post-Processors'
sidebar_current: 'docs-post-processors-manifest'
---
# Manifest Post-Processor
@ -30,7 +30,7 @@ You can specify manifest more than once and write each build to its own file, or
You can simply add `{"type":"manifest"}` to your post-processor section. Below is a more verbose example:
```json
``` json
{
"post-processors": [
{
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-post-processors-shell-local
page_title: Local Shell - Post-Processors
description: |-
description: |
The shell-local Packer post processor enables users to do some post processing
after artifacts have been built.
layout: docs
page_title: 'Local Shell - Post-Processors'
sidebar_current: 'docs-post-processors-shell-local'
---
# Local Shell Post Processor
@ -19,7 +19,7 @@ some task with the packer outputs.
The example below is fully functional.
```json
``` json
{
"type": "shell-local",
"inline": ["echo foo"]
@ -112,7 +112,7 @@ of files produced by a `builder` to a json file after each `builder` is run.
For example, if you wanted to package a file from the file builder into
a tarball, you might write this:
```json
``` json
{
"builders": [
{
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-post-processors-vagrant-cloud
page_title: Vagrant Cloud - Post-Processors
description: |-
description: |
The Packer Vagrant Cloud post-processor receives a Vagrant box from the
`vagrant` post-processor and pushes it to Vagrant Cloud. Vagrant Cloud hosts
and serves boxes to Vagrant, allowing you to version and distribute boxes to
an organization in a simple way.
layout: docs
page_title: 'Vagrant Cloud - Post-Processors'
sidebar_current: 'docs-post-processors-vagrant-cloud'
---
# Vagrant Cloud Post-Processor
@ -34,15 +34,15 @@ and deliver them to your team in some fashion.
Here is an example workflow:
1. You use Packer to build a Vagrant Box for the `virtualbox` provider
1. The `vagrant-cloud` post-processor is configured to point to the box
2. The `vagrant-cloud` post-processor is configured to point to the box
`hashicorp/foobar` on Vagrant Cloud via the `box_tag` configuration
1. The post-processor receives the box from the `vagrant` post-processor
1. It then creates the configured version, or verifies the existence of it, on
3. The post-processor receives the box from the `vagrant` post-processor
4. It then creates the configured version, or verifies the existence of it, on
Vagrant Cloud
1. A provider matching the name of the Vagrant provider is then created
1. The box is uploaded to Vagrant Cloud
1. The upload is verified
1. The version is released and available to users of the box
5. A provider matching the name of the Vagrant provider is then created
6. The box is uploaded to Vagrant Cloud
7. The upload is verified
8. The version is released and available to users of the box
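The steps above can be sketched as a minimal post-processor block (the box
tag and token are placeholders):

``` json
{
  "type": "vagrant-cloud",
  "box_tag": "hashicorp/foobar",
  "access_token": "{{user `cloud_token`}}",
  "version": "1.0.0"
}
```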
## Configuration
@ -90,7 +90,7 @@ An example configuration is below. Note the use of a doubly-nested array, which
ensures that the Vagrant Cloud post-processor is run after the Vagrant
post-processor.
```json
``` json
{
"variables": {
"cloud_token": "{{ env `ATLAS_TOKEN` }}",
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-post-processors-vagrant-box
page_title: Vagrant - Post-Processors
description: |-
description: |
The Packer Vagrant post-processor takes a build and converts the artifact into
a valid Vagrant box, if it can. This lets you use Packer to automatically
create arbitrarily complex Vagrant boxes, and is in fact how the official
boxes distributed by Vagrant are created.
layout: docs
page_title: 'Vagrant - Post-Processors'
sidebar_current: 'docs-post-processors-vagrant-box'
---
# Vagrant Post-Processor
@ -38,7 +38,7 @@ providers.
- VirtualBox
- VMware
-> **Support for additional providers** is planned. If the Vagrant
-&gt; **Support for additional providers** is planned. If the Vagrant
post-processor doesn't support creating boxes for a provider you care about,
please help by contributing to Packer and adding support for it.
@ -85,7 +85,7 @@ post-processor lets you do this.
Specify overrides within the `override` configuration by provider name:
```json
``` json
{
"type": "vagrant",
"compression_level": 1,
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-post-processors-vsphere
page_title: vSphere - Post-Processors
description: |-
description: |
The Packer vSphere post-processor takes an artifact from the VMware builder
and uploads it to a vSphere endpoint.
layout: docs
page_title: 'vSphere - Post-Processors'
sidebar_current: 'docs-post-processors-vsphere'
---
# vSphere Post-Processor
@ -60,5 +60,4 @@ Optional:
- `overwrite` (boolean) - If true, forces the system to overwrite the
  existing files instead of creating new ones. Defaults to false
- `options` (array of strings) - Custom options to add in ovftool. See `ovftool
--help` to list all the options
- `options` (array of strings) - Custom options to add in ovftool. See `ovftool --help` to list all the options
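Putting these together, a configuration might look like the following sketch
(the host, credentials, and the ovftool flag are placeholders to adapt to
your environment):

``` json
{
  "type": "vsphere",
  "host": "vcenter.example.com",
  "username": "packer",
  "password": "secret",
  "datacenter": "dc1",
  "cluster": "cluster1",
  "vm_name": "packer-vm",
  "overwrite": true,
  "options": ["--acceptAllEulas"]
}
```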
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-provisioners-ansible-local
page_title: Ansible Local - Provisioners
description: |-
description: |
The ansible-local Packer provisioner configures Ansible to run on the
machine by Packer from local Playbook and Role files. Playbooks and Roles can
be uploaded from your local machine to the remote machine.
layout: docs
page_title: 'Ansible Local - Provisioners'
sidebar_current: 'docs-provisioners-ansible-local'
---
# Ansible Local Provisioner
@ -18,7 +18,7 @@ uploaded from your local machine to the remote machine. Ansible is run in [local
mode](https://docs.ansible.com/ansible/playbooks_delegation.html#local-playbooks) via the
`ansible-playbook` command.
-> **Note:** Ansible will *not* be installed automatically by this
-&gt; **Note:** Ansible will *not* be installed automatically by this
provisioner. This provisioner expects that Ansible is already installed on the
machine. It is common practice to use the [shell
provisioner](/docs/provisioners/shell.html) before the Ansible provisioner to do
@ -28,7 +28,7 @@ this.
The example below is fully functional.
```json
``` json
{
"type": "ansible-local",
"playbook_file": "local.yml"
@ -48,24 +48,23 @@ Required:
Optional:
- `command` (string) - The command to invoke ansible. Defaults
to "ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook".
to "ANSIBLE\_FORCE\_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook".
Note, This disregards the value of `-color` when passed to `packer build`.
To disable colors, set this to `PYTHONUNBUFFERED=1 ansible-playbook`.
- `extra_arguments` (array of strings) - An array of extra arguments to pass
to the ansible command. By default, this is empty. These arguments _will_
to the ansible command. By default, this is empty. These arguments *will*
be passed through a shell and arguments should be quoted accordingly.
Usage example:
```
"extra_arguments": [ "--extra-vars \"Region={{user `Region`}} Stage={{user `Stage`}}\"" ]
```
<!-- -->
"extra_arguments": [ "--extra-vars \"Region={{user `Region`}} Stage={{user `Stage`}}\"" ]
- `inventory_groups` (string) - A comma-separated list of groups to which
packer will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2`
will generate an Ansible inventory like:
```text
``` text
[my_group_1]
127.0.0.1
[my_group_2]
@ -82,7 +81,7 @@ specified host you're building. The `--limit` argument can be provided in the
An example inventory file may look like:
```text
``` text
[chi-dbservers]
db-01 ansible_connection=local
db-02 ansible_connection=local
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners-ansible-remote
page_title: Ansible - Provisioners
description: |-
description: |
The ansible Packer provisioner allows Ansible playbooks to be run to
provision the machine.
layout: docs
page_title: 'Ansible - Provisioners'
sidebar_current: 'docs-provisioners-ansible-remote'
---
# Ansible Provisioner
@ -23,7 +23,7 @@ given in the json config.
This is a fully functional template that will provision an image on
DigitalOcean. Replace the mock `api_token` value with your own.
```json
``` json
{
"provisioners": [
{
@ -55,7 +55,7 @@ Optional Parameters:
running Ansible.
Usage example:
```json
``` json
{
"ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ]
}
@ -68,10 +68,10 @@ Optional Parameters:
inventory file but remain empty.
- `extra_arguments` (array of strings) - Extra arguments to pass to Ansible.
These arguments _will not_ be passed through a shell and arguments should
These arguments *will not* be passed through a shell and arguments should
not be quoted. Usage example:
```json
``` json
{
"extra_arguments": [ "--extra-vars", "Region={{user `Region`}} Stage={{user `Stage`}}" ]
}
@ -110,7 +110,7 @@ Optional Parameters:
- `ssh_host_key_file` (string) - The SSH key that will be used to run the SSH
server on the host machine to forward commands to the target machine. Ansible
connects to this server and will validate the identity of the server using
the system known_hosts. The default behavior is to generate and use a
the system known\_hosts. The default behavior is to generate and use a
onetime key. Host key checking is disabled via the
`ANSIBLE_HOST_KEY_CHECKING` environment variable if the key is generated.
@ -142,7 +142,7 @@ commonly useful Ansible variables:
Redhat / CentOS builds have been known to fail with the following error due to `sftp_command`, which should be set to `/usr/libexec/openssh/sftp-server -e`:
```text
``` text
==> virtualbox-ovf: starting sftp subsystem
virtualbox-ovf: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
```
@ -151,7 +151,7 @@ Redhat / CentOS builds have been known to fail with the following error due to `
Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible connection to chroot.
```json
``` json
{
"builders": [
{
@ -178,7 +178,7 @@ Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible co
Windows builds require a custom Ansible connection plugin and a particular configuration. Assuming a directory named `connection_plugins` is next to the playbook and contains a file named `packer.py` whose contents is
```python
``` python
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
@ -199,7 +199,7 @@ class Connection(SSHConnection):
This template should build a Windows Server 2012 image on Google Cloud Platform:
```json
``` json
{
"variables": {},
"provisioners": [
@ -230,3 +230,4 @@ This template should build a Windows Server 2012 image on Google Cloud Platform:
}
]
}
```
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-provisioners-chef-client
page_title: Chef Client - Provisioners
description: |-
description: |
The chef-client Packer provisioner installs and configures software on
machines built by Packer using chef-client. Packer configures a Chef client to
talk to a remote Chef Server to provision the machine.
layout: docs
page_title: 'Chef Client - Provisioners'
sidebar_current: 'docs-provisioners-chef-client'
---
# Chef Client Provisioner
@ -25,7 +25,7 @@ installed, using the official Chef installers provided by Chef.
The example below is fully functional. It will install Chef onto the remote
machine and run Chef client.
```json
``` json
{
"type": "chef-client",
"server_url": "https://mychefserver.com/"
@ -82,7 +82,7 @@ configuration is actually required.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to install and run Chef are executed with `sudo`. If this is true,
then the sudo will be omitted. This has no effect when guest_os_type is
then the sudo will be omitted. This has no effect when guest\_os\_type is
windows.
- `run_list` (array of strings) - The [run
@ -107,7 +107,7 @@ configuration is actually required.
- `staging_directory` (string) - This is the directory where all the
configuration of Chef by Packer will be placed. By default this is
"/tmp/packer-chef-client" when guest_os_type unix and
"/tmp/packer-chef-client" when guest\_os\_type unix and
"$env:TEMP/packer-chef-client" when windows. This directory doesn't need to
exist but must have proper permissions so that the user that Packer uses is
able to create directories and write into this folder. By default the
@ -135,7 +135,7 @@ template if you'd like to set custom configurations.
The default value for the configuration template is:
```liquid
``` liquid
log_level :info
log_location STDOUT
chef_server_url "{{.ServerUrl}}"
@ -178,18 +178,18 @@ variables available to use:
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Chef:
```liquid
``` liquid
{{if .Sudo}}sudo {{end}}chef-client \
--no-color \
-c {{.ConfigPath}} \
-j {{.JsonPath}}
```
When guest_os_type is set to "windows", Packer uses the following command to
When guest\_os\_type is set to "windows", Packer uses the following command to
execute Chef. The full path to Chef is required because the PATH environment
variable changes don't immediately propagate to running processes.
```liquid
``` liquid
c:/opscode/chef/bin/chef-client.bat \
--no-color \
-c {{.ConfigPath}} \
@ -211,15 +211,15 @@ By default, Packer uses the following command (broken across multiple lines for
readability) to install Chef. This command can be customized if you want to
install Chef in another way.
```text
``` text
curl -L https://www.chef.io/chef/install.sh | \
{{if .Sudo}}sudo{{end}} bash
```
When guest_os_type is set to "windows", Packer uses the following command to
When guest\_os\_type is set to "windows", Packer uses the following command to
install the latest version of Chef:
```text
``` text
powershell.exe -Command "(New-Object System.Net.WebClient).DownloadFile('http://chef.io/chef/install.msi', 'C:\\Windows\\Temp\\chef.msi');Start-Process 'msiexec' -ArgumentList '/qb /i C:\\Windows\\Temp\\chef.msi' -NoNewWindow -Wait"
```
@ -230,17 +230,17 @@ This command can be customized using the `install_command` configuration.
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Chef:
```liquid
``` liquid
{{if .Sudo}}sudo {{end}}knife \
{{.Args}} \
{{.Flags}}
```
When guest_os_type is set to "windows", Packer uses the following command to
When guest\_os\_type is set to "windows", Packer uses the following command to
execute Chef. The full path to Chef is required because the PATH environment
variable changes don't immediately propagate to running processes.
```liquid
``` liquid
c:/opscode/chef/bin/knife.bat \
{{.Args}} \
{{.Flags}}
@ -272,19 +272,17 @@ mode, while passing a `run_list` using a variable.
**Local environment variables**
```
# Machines Chef directory
export PACKER_CHEF_DIR=/var/chef-packer
# Comma separated run_list
export PACKER_CHEF_RUN_LIST="recipe[apt],recipe[nginx]"
```
# Machines Chef directory
export PACKER_CHEF_DIR=/var/chef-packer
# Comma separated run_list
export PACKER_CHEF_RUN_LIST="recipe[apt],recipe[nginx]"
**Packer variables**
Set the necessary Packer variables using environment variables or provide a [var
file](/docs/templates/user-variables.html).
```json
``` json
"variables": {
"chef_dir": "{{env `PACKER_CHEF_DIR`}}",
"chef_run_list": "{{env `PACKER_CHEF_RUN_LIST`}}",
@ -301,7 +299,7 @@ Make sure we have the correct directories and permissions for the `chef-client`
provisioner. You will need to bootstrap the Chef run by providing the necessary
cookbooks using Berkshelf or some other means.
```json
``` json
{
"type": "file",
"source": "{{user `packer_chef_bootstrap_dir`}}",
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-provisioners-chef-solo
page_title: Chef Solo - Provisioners
description: |-
description: |
The chef-solo Packer provisioner installs and configures software on machines
built by Packer using chef-solo. Cookbooks can be uploaded from your local
machine to the remote machine or remote paths can be used.
layout: docs
page_title: 'Chef Solo - Provisioners'
sidebar_current: 'docs-provisioners-chef-solo'
---
# Chef Solo Provisioner
@ -25,7 +25,7 @@ installed, using the official Chef installers provided by Chef Inc.
The example below is fully functional and expects cookbooks in the "cookbooks"
directory relative to your working directory.
```json
``` json
{
"type": "chef-solo",
"cookbook_paths": ["cookbooks"]
@ -83,7 +83,7 @@ configuration is actually required, but at least `run_list` is recommended.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to install and run Chef are executed with `sudo`. If this is true,
then the sudo will be omitted. This has no effect when guest_os_type is
then the sudo will be omitted. This has no effect when guest\_os\_type is
windows.
- `remote_cookbook_paths` (array of strings) - A list of paths on the remote
@ -104,7 +104,7 @@ configuration is actually required, but at least `run_list` is recommended.
- `staging_directory` (string) - This is the directory where all the
configuration of Chef by Packer will be placed. By default this is
"/tmp/packer-chef-solo" when guest_os_type unix and
"/tmp/packer-chef-solo" when guest\_os\_type unix and
"$env:TEMP/packer-chef-solo" when windows. This directory doesn't need to
exist but must have proper permissions so that the user that Packer uses is
able to create directories and write into this folder. If the permissions
@ -122,7 +122,7 @@ template if you'd like to set custom configurations.
The default value for the configuration template is:
```liquid
``` liquid
cookbook_path [{{.CookbookPaths}}]
```
@ -144,18 +144,18 @@ variables available to use:
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Chef:
```liquid
``` liquid
{{if .Sudo}}sudo {{end}}chef-solo \
--no-color \
-c {{.ConfigPath}} \
-j {{.JsonPath}}
```
When guest_os_type is set to "windows", Packer uses the following command to
When guest\_os\_type is set to "windows", Packer uses the following command to
execute Chef. The full path to Chef is required because the PATH environment
variable changes don't immediately propagate to running processes.
```liquid
``` liquid
c:/opscode/chef/bin/chef-solo.bat \
--no-color \
-c {{.ConfigPath}} \
@ -177,15 +177,15 @@ By default, Packer uses the following command (broken across multiple lines for
readability) to install Chef. This command can be customized if you want to
install Chef in another way.
```text
``` text
curl -L https://omnitruck.chef.io/install.sh | \
{{if .Sudo}}sudo{{end}} bash -s --{{if .Version}} -v {{.Version}}{{end}}
```
When guest_os_type is set to "windows", Packer uses the following command to
When guest\_os\_type is set to "windows", Packer uses the following command to
install the latest version of Chef:
```text
``` text
powershell.exe -Command \". { iwr -useb https://omnitruck.chef.io/install.ps1 } | iex; install\"
```
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners-converge
page_title: Converge - Provisioners
description: |-
description: |
The converge Packer provisioner uses Converge modules to provision the
machine.
layout: docs
page_title: 'Converge - Provisioners'
sidebar_current: 'docs-provisioners-converge'
---
# Converge Provisioner
@ -22,7 +22,7 @@ new images.
The example below is fully functional.
```json
``` json
{
"type": "converge",
"module": "https://raw.githubusercontent.com/asteris-llc/converge/master/samples/fileContent.hcl",
@ -86,7 +86,7 @@ directory.
By default, Packer uses the following command (broken across multiple lines for readability) to execute Converge:
```liquid
``` liquid
cd {{.WorkingDirectory}} && \
{{if .Sudo}}sudo {{end}}converge apply \
--local \
@ -108,7 +108,7 @@ contain various template variables:
By default, Packer uses the following command to bootstrap Converge:
```liquid
``` liquid
curl -s https://get.converge.sh | {{if .Sudo}}sudo {{end}}sh {{if ne .Version ""}}-s -- -v {{.Version}}{{end}}
```
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-provisioners-custom
page_title: Custom - Provisioners
description: |-
description: |
Packer is extensible, allowing you to write new provisioners without having to
modify the core source code of Packer itself. Documentation for creating new
provisioners is covered in the custom provisioners page of the Packer plugin
section.
layout: docs
page_title: 'Custom - Provisioners'
sidebar_current: 'docs-provisioners-custom'
---
# Custom Provisioner
@ -1,12 +1,12 @@
---
layout: docs
sidebar_current: docs-provisioners-file
page_title: File - Provisioners
description: |-
description: |
The file Packer provisioner uploads files to machines built by Packer. The
recommended usage of the file provisioner is to use it to upload files, and
then use shell provisioner to move them to the proper place, set permissions,
etc.
layout: docs
page_title: 'File - Provisioners'
sidebar_current: 'docs-provisioners-file'
---
# File Provisioner
@ -22,7 +22,7 @@ The file provisioner can upload both single files and complete directories.
## Basic Example
```json
``` json
{
"type": "file",
"source": "app.tar.gz",
@ -86,7 +86,7 @@ treat local symlinks as regular files. If you wish to preserve symlinks when
uploading, it's recommended that you use `tar`. Below is an example of what
that might look like:
```text
``` text
$ ls -l files
total 16
drwxr-xr-x 3 mwhooker staff 102 Jan 27 17:10 a
@ -95,7 +95,7 @@ lrwxr-xr-x 1 mwhooker staff 1 Jan 27 17:10 b -> a
lrwxr-xr-x 1 mwhooker staff 5 Jan 27 17:10 file1link -> file1
```
```json
``` json
{
"provisioners": [
{
@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners
page_title: Provisioners
description: |-
description: |
Provisioners use builtin and third-party software to install and configure the
machine image after booting.
layout: docs
page_title: Provisioners
sidebar_current: 'docs-provisioners'
---
# Provisioners
@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-provisioners-powershell
page_title: PowerShell - Provisioners
description: |-
description: |
The shell Packer provisioner provisions machines built by Packer using shell
scripts. Shell provisioning is the easiest way to get software installed and
configured on a machine.
layout: docs
page_title: 'PowerShell - Provisioners'
sidebar_current: 'docs-provisioners-powershell'
---
# PowerShell Provisioner
@ -19,7 +19,7 @@ It assumes that the communicator in use is WinRM.
The example below is fully functional.
```json
``` json
{
"type": "powershell",
"inline": ["dir c:\\"]
@ -84,7 +84,6 @@ Optional parameters:
- `valid_exit_codes` (list of ints) - Valid exit codes for the script. By
default this is just 0.
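For instance, a Windows installer that requests a reboot commonly exits with
code 3010; a sketch that treats that code as success:

``` json
{
  "type": "powershell",
  "inline": ["dir c:\\"],
  "valid_exit_codes": [0, 3010]
}
```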
## Default Environmental Variables
In addition to being able to specify custom environmental variables using the
@ -1,13 +1,13 @@
---
layout: docs
sidebar_current: docs-provisioners-puppet-masterless
page_title: Puppet Masterless - Provisioners
description: |-
description: |
The masterless Puppet Packer provisioner configures Puppet to run on the
machines by Packer from local modules and manifest files. Modules and
manifests can be uploaded from your local machine to the remote machine or can
simply use remote paths. Puppet is run in masterless mode, meaning it never
communicates to a Puppet master.
layout: docs
page_title: 'Puppet Masterless - Provisioners'
sidebar_current: 'docs-provisioners-puppet-masterless'
---
# Puppet (Masterless) Provisioner
@ -21,7 +21,7 @@ remote paths (perhaps obtained using something like the shell provisioner).
Puppet is run in masterless mode, meaning it never communicates to a Puppet
master.
-> **Note:** Puppet will *not* be installed automatically by this
-&gt; **Note:** Puppet will *not* be installed automatically by this
provisioner. This provisioner expects that Puppet is already installed on the
machine. It is common practice to use the [shell
provisioner](/docs/provisioners/shell.html) before the Puppet provisioner to do
@ -32,7 +32,7 @@ this.
The example below is fully functional and expects the configured manifest file
to exist relative to your working directory.
```json
``` json
{
"type": "puppet-masterless",
"manifest_file": "site.pp"
@ -82,7 +82,7 @@ Optional parameters:
`manifest_file`. It is a separate directory that will be set as the
"manifestdir" setting on Puppet.
~> `manifest_dir` is passed to `puppet apply` as the `--manifestdir` option.
~&gt; `manifest_dir` is passed to `puppet apply` as the `--manifestdir` option.
This option was deprecated in puppet 3.6, and removed in puppet 4.0. If you have
multiple manifests you should use `manifest_file` instead.
@ -117,7 +117,7 @@ multiple manifests you should use `manifest_file` instead.
By default, Packer uses the following command (broken across multiple lines for
readability) to execute Puppet:
```liquid
``` liquid
cd {{.WorkingDir}} && \
{{.FacterVars}}{{if .Sudo}} sudo -E {{end}} \
{{if ne .PuppetBinDir \"\"}}{{.PuppetBinDir}}{{end}}puppet apply \


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners-puppet-server
page_title: Puppet Server - Provisioners
description: |-
description: |
The puppet-server Packer provisioner provisions Packer machines with Puppet
by connecting to a Puppet master.
layout: docs
page_title: 'Puppet Server - Provisioners'
sidebar_current: 'docs-provisioners-puppet-server'
---
# Puppet Server Provisioner
@ -14,7 +14,7 @@ Type: `puppet-server`
The `puppet-server` Packer provisioner provisions Packer machines with Puppet by
connecting to a Puppet master.
-> **Note:** Puppet will *not* be installed automatically by this
-&gt; **Note:** Puppet will *not* be installed automatically by this
provisioner. This provisioner expects that Puppet is already installed on the
machine. It is common practice to use the [shell
provisioner](/docs/provisioners/shell.html) before the Puppet provisioner to do
@ -25,7 +25,7 @@ this.
The example below is fully functional and expects a Puppet server to be
accessible from your network.
```json
``` json
{
"type": "puppet-server",
"options": "--test --pluginsync",
@ -86,7 +86,7 @@ listed below:
variables](/docs/templates/engine.html) available. See
below for more information. By default, Packer uses the following command:
```liquid
``` liquid
{{.FacterVars}} {{if .Sudo}} sudo -E {{end}} \
{{if ne .PuppetBinDir \"\"}}{{.PuppetBinDir}}/{{end}}puppet agent --onetime --no-daemonize \
{{if ne .PuppetServer \"\"}}--server='{{.PuppetServer}}' {{end}} \


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners-salt-masterless
page_title: Salt Masterless - Provisioners
description: |-
description: |
The salt-masterless Packer provisioner provisions machines built by Packer
using Salt states, without connecting to a Salt master.
layout: docs
page_title: 'Salt Masterless - Provisioners'
sidebar_current: 'docs-provisioners-salt-masterless'
---
# Salt Masterless Provisioner
@ -18,7 +18,7 @@ using [Salt](http://saltstack.com/) states, without connecting to a Salt master.
The example below is fully functional.
```json
``` json
{
"type": "salt-masterless",
"local_state_tree": "/Users/me/salt"
@ -66,8 +66,7 @@ Optional:
uploaded to the `/etc/salt` on the remote. This option overrides the
`remote_state_tree` or `remote_pillar_roots` options.
- `grains_file` (string) - The path to your local [grains file](
https://docs.saltstack.com/en/latest/topics/grains). This will be
- `grains_file` (string) - The path to your local [grains file](https://docs.saltstack.com/en/latest/topics/grains). This will be
uploaded to `/etc/salt/grains` on the remote.
- `skip_bootstrap` (boolean) - By default the salt provisioner runs [salt

View File

@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-provisioners-shell-local
page_title: Shell (Local) - Provisioners
description: |-
description: |
The shell Packer provisioner provisions machines built by Packer using shell
scripts. Shell provisioning is the easiest way to get software installed and
configured on a machine.
layout: docs
page_title: 'Shell (Local) - Provisioners'
sidebar_current: 'docs-provisioners-shell-local'
---
# Local Shell Provisioner
@ -20,7 +20,7 @@ shell scripts on a remote machine.
The example below is fully functional.
```json
``` json
{
"type": "shell-local",
"command": "echo foo"


@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-provisioners-shell-remote
page_title: Shell - Provisioners
description: |-
description: |
The shell Packer provisioner provisions machines built by Packer using shell
scripts. Shell provisioning is the easiest way to get software installed and
configured on a machine.
layout: docs
page_title: 'Shell - Provisioners'
sidebar_current: 'docs-provisioners-shell-remote'
---
# Shell Provisioner
@ -16,7 +16,7 @@ The shell Packer provisioner provisions machines built by Packer using shell
scripts. Shell provisioning is the easiest way to get software installed and
configured on a machine.
-> **Building Windows images?** You probably want to use the
-&gt; **Building Windows images?** You probably want to use the
[PowerShell](/docs/provisioners/powershell.html) or [Windows
Shell](/docs/provisioners/windows-shell.html) provisioners.
@ -24,7 +24,7 @@ Shell](/docs/provisioners/windows-shell.html) provisioners.
The example below is fully functional.
```json
``` json
{
"type": "shell",
"inline": ["echo foo"]
@ -87,11 +87,11 @@ Optional parameters:
the machine. This defaults to '/tmp'.
- `remote_file` (string) - The filename the uploaded script will have on the machine.
This defaults to 'script_nnn.sh'.
This defaults to 'script\_nnn.sh'.
- `remote_path` (string) - The full path to the uploaded script will have on the
machine. By default this is remote_folder/remote_file, if set this option will
override both remote_folder and remote_file.
machine. By default this is remote\_folder/remote\_file, if set this option will
override both remote\_folder and remote\_file.
- `skip_clean` (boolean) - If true, specifies that the helper scripts
uploaded to the system will not be removed by Packer. This defaults to
@ -116,7 +116,7 @@ Some operating systems default to a non-root user. For example if you login as
`ubuntu` and can sudo using the password `packer`, then you'll want to change
`execute_command` to be:
```text
``` text
"echo 'packer' | sudo -S sh -c '{{ .Vars }} {{ .Path }}'"
```
@ -131,7 +131,7 @@ privileges without worrying about password prompts.
The following contrived example shows how to pass environment variables and
change the permissions of the script to be executed:
```text
``` text
chmod +x {{ .Path }}; chmod 0700 {{ .Path}}; env {{ .Vars }} {{ .Path }}
```
@ -168,9 +168,9 @@ scripts. The amount of time the provisioner will wait is configured using
Sometimes, when executing a command like `reboot`, the shell script will return
and Packer will start executing the next one before SSH actually quits and the
machine restarts. For this, use "pause_before" to make Packer wait before executing the next script:
machine restarts. For this, use "pause\_before" to make Packer wait before executing the next script:
```json
``` json
{
"type": "shell",
"script": "script.sh",
@ -183,7 +183,7 @@ causing the provisioner to hang despite a reboot occurring. In this case, make
sure you shut down the network interfaces on reboot or in your shell script. For
example, on Gentoo:
```text
``` text
/etc/init.d/net.eth0 stop
```
@ -203,7 +203,7 @@ provisioner](/docs/provisioners/file.html) (more secure) or using `ssh-keyscan`
to populate the file (less secure). An example of the latter accessing github
would be:
```json
``` json
{
"type": "shell",
"inline": [
@ -246,7 +246,7 @@ would be:
create race conditions. Your first provisioner can tell the machine to wait
until it completely boots.
```json
``` json
{
"type": "shell",
"inline": [ "sleep 10" ]


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners-windows-restart
page_title: Windows Restart - Provisioners
description: |-
description: |
The Windows restart provisioner restarts a Windows machine and waits for it to
come back up.
layout: docs
page_title: 'Windows Restart - Provisioners'
sidebar_current: 'docs-provisioners-windows-restart'
---
# Windows Restart Provisioner
@ -25,7 +25,7 @@ through the Windows Remote Management (WinRM) service, not by ACPI functions, so
The example below is fully functional.
```json
``` json
{
"type": "windows-restart"
}
@ -38,8 +38,7 @@ The reference of available configuration options is listed below.
Optional parameters:
- `restart_command` (string) - The command to execute to initiate the
restart. By default this is `shutdown /r /c "packer restart" /t 5 && net
stop winrm`. A key action of this is to stop WinRM so that Packer can
restart. By default this is `shutdown /r /c "packer restart" /t 5 && net stop winrm`. A key action of this is to stop WinRM so that Packer can
detect it is rebooting.
- `restart_check_command` (string) - A command to execute to check if the


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-provisioners-windows-shell
page_title: Windows Shell - Provisioners
description: |-
description: |
The windows-shell Packer provisioner runs commands on Windows using the cmd
shell.
layout: docs
page_title: 'Windows Shell - Provisioners'
sidebar_current: 'docs-provisioners-windows-shell'
---
# Windows Shell Provisioner
@ -18,7 +18,7 @@ The windows-shell Packer provisioner runs commands on a Windows machine using
The example below is fully functional.
```json
``` json
{
"type": "windows-shell",
"inline": ["dir c:\\"]
@ -75,7 +75,6 @@ Optional parameters:
system reboot. Set this to a higher value if reboots take a longer amount
of time.
## Default Environmental Variables
In addition to being able to specify custom environmental variables using the


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-templates-builders
page_title: Builders - Templates
description: |-
description: |
Within the template, the builders section contains an array of all the
builders that Packer should use to generate machine images for the template.
layout: docs
page_title: 'Builders - Templates'
sidebar_current: 'docs-templates-builders'
---
# Template Builders
@ -23,7 +23,7 @@ referenced from the documentation for that specific builder.
Within a template, a section of builder definitions looks like this:
```json
``` json
{
"builders": [
// ... one or more builder definitions here
@ -45,7 +45,7 @@ These are placed directly within the builder definition.
An example builder definition is shown below, in this case configuring the AWS
builder:
```json
``` json
{
"type": "amazon-ebs",
"access_key": "...",


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-templates-communicators
page_title: Communicators - Templates
description: |-
description: |
Communicators are the mechanism Packer uses to upload files, execute scripts,
etc. with the machine being created.
layout: docs
page_title: 'Communicators - Templates'
sidebar_current: 'docs-templates-communicators'
---
# Template Communicators
@ -36,7 +36,7 @@ configure everything.
However, to specify a communicator, you set the `communicator` key within
a build. Multiple builds can have different communicators. Example:
```json
``` json
{
"builders": [
{
@ -68,7 +68,7 @@ The SSH communicator has the following options:
with the bastion host.
- `ssh_bastion_port` (integer) - The port of the bastion host. Defaults to
22.
1.
- `ssh_bastion_private_key_file` (string) - A private key file to use
to authenticate with the bastion host.
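Combining the bastion options above, a minimal sketch of an SSH-through-bastion build might look like the following (the host, user, and key path are placeholders, `ssh_bastion_host` and `ssh_bastion_username` are assumed option names not shown in this excerpt, and the builder's other required fields are omitted):

``` json
{
  "type": "amazon-ebs",
  "communicator": "ssh",
  "ssh_bastion_host": "bastion.example.com",
  "ssh_bastion_port": 22,
  "ssh_bastion_username": "ubuntu",
  "ssh_bastion_private_key_file": "~/.ssh/bastion"
}
```

Packer would then open its SSH connection to the build machine through the bastion rather than directly.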


@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-templates-engine
page_title: Template Engine - Templates
description: |-
description: |
All strings within templates are processed by a common Packer templating
engine, where variables and functions can be used to modify the value of a
configuration parameter at runtime.
layout: docs
page_title: 'Template Engine - Templates'
sidebar_current: 'docs-templates-engine'
---
# Template Engine
@ -16,17 +16,16 @@ configuration parameter at runtime.
The syntax of templates uses the following conventions:
* Anything template related happens within double-braces: `{{ }}`.
* Functions are specified directly within the braces, such as `{{timestamp}}`.
* Template variables are prefixed with a period and capitalized, such as
- Anything template related happens within double-braces: `{{ }}`.
- Functions are specified directly within the braces, such as `{{timestamp}}`.
- Template variables are prefixed with a period and capitalized, such as
`{{.Variable}}`.
## Functions
Functions perform operations on and within strings. For example, the `{{timestamp}}` function can be used in any string to generate
the current timestamp. This is useful for configurations that require unique
keys, such as AMI names. By setting the AMI name to something like `My Packer
AMI {{timestamp}}`, the AMI name will be unique down to the second. If you
keys, such as AMI names. By setting the AMI name to something like `My Packer AMI {{timestamp}}`, the AMI name will be unique down to the second. If you
need greater than one second granularity, you should use `{{uuid}}`, for
example when you have multiple builders in the same template.
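As a hedged sketch of that point (the builder type and its other required fields are illustrative and omitted for brevity), an AMI name can combine both functions:

``` json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ami_name": "packer example {{timestamp}} {{uuid}}"
    }
  ]
}
```

Each build then produces a name that stays unique even when several builders start within the same second.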
@ -55,7 +54,7 @@ Here is a full list of the available functions for reference.
Template variables are special variables automatically set by Packer at build time. Some builders, provisioners, and other components have template variables that are available only for that component. Template variables are recognizable because they're prefixed by a period, such as `{{ .Name }}`. For example, when using the [`shell`](/docs/builders/vmware-iso.html) builder, template variables are available to customize the [`execute_command`](/docs/provisioners/shell.html#execute_command) parameter used to determine how Packer will run the shell command.
```liquid
``` liquid
{
"provisioners": [
{
@ -71,7 +70,7 @@ Template variables are special variables automatically set by Packer at build ti
The `{{ .Vars }}` and `{{ .Path }}` template variables will be replaced with the list of environment variables and the path to the script to be executed, respectively.
-> **Note:** In addition to template variables, you can specify your own user variables. See the [user variable](/docs/templates/user-variables.html) documentation for more information on user variables.
-&gt; **Note:** In addition to template variables, you can specify your own user variables. See the [user variable](/docs/templates/user-variables.html) documentation for more information on user variables.
# isotime Function Format Reference
@ -168,14 +167,13 @@ Formatting for the function `isotime` uses the magic reference date **Mon Jan 2
</td>
</tr>
</table>
*The values in parentheses are the abbreviated, or 24-hour clock values*
Note that "-0700" is always formatted into "+0000" because `isotime` is always UTC time.
Here are some example formatted times, using the above format options:
```liquid
``` liquid
isotime = June 7, 7:22:43pm 2014
{{isotime "2006-01-02"}} = 2014-06-07
@ -186,7 +184,7 @@ isotime = June 7, 7:22:43pm 2014
Please note that double quote characters need escaping inside of templates (in this case, on the `ami_name` value):
```json
``` json
{
"builders": [
{
@ -203,4 +201,4 @@ Please note that double quote characters need escaping inside of templates (in t
}
```
-> **Note:** See the [Amazon builder](/docs/builders/amazon.html) documentation for more information on how to correctly configure the Amazon builder in this example.
-&gt; **Note:** See the [Amazon builder](/docs/builders/amazon.html) documentation for more information on how to correctly configure the Amazon builder in this example.


@ -1,13 +1,13 @@
---
layout: docs
page_title: Templates
sidebar_current: docs-templates
description: |-
description: |
Templates are JSON files that configure the various components of Packer in
order to create one or more machine images. Templates are portable, static,
and readable and writable by both humans and computers. This has the added
benefit of being able to not only create and modify templates by hand, but
also write scripts to dynamically create or modify templates.
layout: docs
page_title: Templates
sidebar_current: 'docs-templates'
---
# Templates
@ -70,7 +70,7 @@ JSON doesn't support comments and Packer reports unknown keys as validation
errors. If you'd like to comment your template, you can prefix a *root level*
key with an underscore. Example:
```json
``` json
{
"_comment": "This is a comment",
"builders": [
@ -86,9 +86,9 @@ builders, provisioners, etc. will still result in validation errors.
Below is an example of a basic template that could be invoked with `packer build`. It would create an instance in AWS, and once it is running, copy a script to it and run that script using SSH.
-> **Note:** This example requires an account with Amazon Web Services. There are a number of parameters which need to be provided for a functional build to take place. See the [Amazon builder](/docs/builders/amazon.html) documentation for more information.
-&gt; **Note:** This example requires an account with Amazon Web Services. There are a number of parameters which need to be provided for a functional build to take place. See the [Amazon builder](/docs/builders/amazon.html) documentation for more information.
```json
``` json
{
"builders": [
{


@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-templates-post-processors
page_title: Post-Processors - Templates
description: |-
description: |
The post-processor section within a template configures any post-processing
that will be done to images built by the builders. Examples of post-processing
would be compressing files, uploading artifacts, etc.
layout: docs
page_title: 'Post-Processors - Templates'
sidebar_current: 'docs-templates-post-processors'
---
# Template Post-Processors
@ -25,7 +25,7 @@ post-processor.
Within a template, a section of post-processor definitions looks like this:
```json
``` json
{
"post-processors": [
// ... one or more post-processor definitions here
@ -51,7 +51,7 @@ A **simple definition** is just a string; the name of the post-processor. An
example is shown below. Simple definitions are used when no additional
configuration is needed for the post-processor.
```json
``` json
{
"post-processors": ["compress"]
}
@ -63,7 +63,7 @@ post-processor, but may also contain additional configuration for the
post-processor. A detailed definition is used when additional configuration is
needed beyond simply the type for the post-processor. An example is shown below.
```json
``` json
{
"post-processors": [
{
@ -84,7 +84,7 @@ compressed then uploaded, but the compressed result is not kept.
It is very important that any post processors that need to be run in order be sequenced!
```json
``` json
{
"post-processors": [
[
@ -102,7 +102,7 @@ simply shortcuts for a **sequence** definition of only one element.
It is important to sequence post processors when creating and uploading vagrant boxes to Atlas via Packer. Using a sequence will ensure that the post processors are run in order and that the vagrant box is created prior to uploading the box to Atlas.
```json
``` json
{
"post-processors": [
[
@ -138,7 +138,7 @@ In some cases, however, you may want to keep the intermediary artifacts. You can
tell Packer to keep these artifacts by setting the `keep_input_artifact`
configuration to `true`. An example is shown below:
```json
``` json
{
"post-processors": [
{
@ -154,7 +154,7 @@ post-processor. If you're specifying a sequence of post-processors, then all
intermediaries are discarded by default except for the input artifacts to
post-processors that explicitly state to keep the input artifact.
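A hedged sketch of that sequence case (the post-processor types are illustrative): only the first post-processor in the sequence asks to keep its input, so only that intermediate artifact survives.

``` json
{
  "post-processors": [
    [
      {
        "type": "vagrant",
        "keep_input_artifact": true
      },
      "compress"
    ]
  ]
}
```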
-> **Note:** The intuitive reader may be wondering what happens if multiple
-&gt; **Note:** The intuitive reader may be wondering what happens if multiple
post-processors are specified (not in a sequence). Does Packer require the
configuration to keep the input artifact on all the post-processors? The answer
is no, of course not. Packer is smart enough to figure out that at least one
@ -172,7 +172,7 @@ effectively the same. `only` and `except` can only be specified on "detailed"
configurations. If you have a sequence of post-processors to run, `only` and
`except` will only affect that single post-processor in the sequence.
```json
``` json
{
"type": "vagrant",
"only": ["virtualbox-iso"]


@ -1,11 +1,11 @@
---
layout: docs
sidebar_current: docs-templates-provisioners
page_title: Provisioners - Templates
description: |-
description: |
Within the template, the provisioners section contains an array of all the
provisioners that Packer should use to install and configure software within
running machines prior to turning them into machine images.
layout: docs
page_title: 'Provisioners - Templates'
sidebar_current: 'docs-templates-provisioners'
---
# Template Provisioners
@ -25,7 +25,7 @@ be referenced from the documentation for that specific provisioner.
Within a template, a section of provisioner definitions looks like this:
```json
``` json
{
"provisioners": [
// ... one or more provisioner definitions here
@ -50,7 +50,7 @@ specifies a path to a shell script to execute within the machines being created.
An example provisioner definition is shown below, configuring the shell
provisioner to run a local script within the machines:
```json
``` json
{
"type": "shell",
"script": "script.sh"
@ -67,7 +67,7 @@ provisioner on anything other than the specified builds.
An example of `only` being used is shown below, but the usage of `except` is
effectively the same:
```json
``` json
{
"type": "shell",
"script": "script.sh",
@ -97,7 +97,7 @@ identical. However, they may initially need to be run differently.
This example is shown below:
```json
``` json
{
"type": "shell",
"script": "script.sh",
@ -126,7 +126,7 @@ Every provisioner definition in a Packer template can take a special
configuration `pause_before` that is the amount of time to pause before running
that provisioner. By default, there is no pause. An example is shown below:
```json
``` json
{
"type": "shell",
"script": "script.sh",


@ -1,10 +1,10 @@
---
layout: docs
sidebar_current: docs-templates-push
page_title: Push - Templates
description: |-
description: |
Within the template, the push section configures how a template can be pushed
to a remote build service.
layout: docs
page_title: 'Push - Templates'
sidebar_current: 'docs-templates-push'
---
# Template Push
@ -22,7 +22,7 @@ services will come in the form of plugins in the future.
Within a template, a push configuration section looks like this:
```json
``` json
{
"push": {
// ... push configuration here
@ -69,7 +69,7 @@ each category, the available configuration keys are alphabetized.
A push configuration section with minimal options:
```json
``` json
{
"push": {
"name": "hashicorp/precise64"
@ -80,7 +80,7 @@ A push configuration section with minimal options:
A push configuration specifying Packer to inspect the VCS and list individual
files to include:
```json
``` json
{
"push": {
"name": "hashicorp/precise64",


@ -1,13 +1,13 @@
---
layout: docs
sidebar_current: docs-templates-user-variables
page_title: User Variables - Templates
description: |-
description: |
User variables allow your templates to be further configured with variables
from the command-line, environment variables, or files. This lets you
parameterize your templates so that you can keep secret tokens,
environment-specific data, and other types of information out of your
templates. This maximizes the portability and shareability of the template.
layout: docs
page_title: 'User Variables - Templates'
sidebar_current: 'docs-templates-user-variables'
---
# Template User Variables
@ -34,7 +34,7 @@ The `variables` section is a key/value mapping of the user variable name
to a default value. A default value can be the empty string. An example
is shown below:
```json
``` json
{
"variables": {
"aws_access_key": "",
@ -72,7 +72,7 @@ The `env` function is available *only* within the default value of a user
variable, allowing you to default a user variable to an environment variable.
An example is shown below:
```json
``` json
{
"variables": {
"my_secret": "{{env `MY_SECRET`}}",
@ -83,7 +83,7 @@ An example is shown below:
This will default "my\_secret" to be the value of the "MY\_SECRET" environment
variable (or an empty string if it does not exist).
-> **Why can't I use environment variables elsewhere?** User variables are
-&gt; **Why can't I use environment variables elsewhere?** User variables are
the single source of configurable input to a template. We felt that having
environment variables used *anywhere* in a template would confuse the user
about the possible inputs to a template. By allowing environment variables
@ -91,7 +91,7 @@ only within default values for user variables, user variables remain as the
single source of input to a template that a user can easily discover using
`packer inspect`.
-> **Why can't I use `~` for home variable?** `~` is a special variable
-&gt; **Why can't I use `~` for home variable?** `~` is a special variable
that is evaluated by the shell during variable expansion. As Packer doesn't run
inside a shell, it won't expand `~`.
@ -110,7 +110,7 @@ example above, we could build our template using the command below. The
command is split across multiple lines for readability, but can of
course be a single line.
```text
``` text
$ packer build \
-var 'aws_access_key=foo' \
-var 'aws_secret_key=bar' \
@ -127,7 +127,7 @@ Variables can also be set from an external JSON file. The `-var-file` flag reads
a file containing a key/value mapping of variables to values and sets
those variables. An example JSON file may look like this:
```json
``` json
{
"aws_access_key": "foo",
"aws_secret_key": "bar"
@ -138,7 +138,7 @@ It is a single JSON object where the keys are variables and the values are the
variable values. Assuming this file is in `variables.json`, we can build our
template using the following command:
```text
``` text
$ packer build -var-file=variables.json template.json
```
@ -151,7 +151,7 @@ expect. Variables set later in the command override variables set
earlier. So, for example, in the following command with the above
`variables.json` file:
```text
``` text
$ packer build \
-var 'aws_access_key=bar' \
-var-file=variables.json \
@ -162,9 +162,9 @@ $ packer build \
Results in the following variables:
| Variable | Value |
| -------- | --------- |
| aws_access_key | foo |
| aws_secret_key | baz |
|------------------|-------|
| aws\_access\_key | foo |
| aws\_secret\_key | baz |
# Recipes
@ -176,7 +176,7 @@ be able to do this by referencing the variable within a command that
you execute. For example, here is how to make a `shell-local`
provisioner only run if the `do_nexpose_scan` variable is non-empty.
```json
``` json
{
"type": "shell-local",
"command": "if [ ! -z \"{{user `do_nexpose_scan`}}\" ]; then python -u trigger_nexpose_scan.py; fi"
@ -187,7 +187,7 @@ provisioner only run if the `do_nexpose_scan` variable is non-empty.
In order to use `$HOME` variable, you can create a `home` variable in Packer:
```json
``` json
{
"variables": {
"home": "{{env `HOME`}}"
@ -197,7 +197,7 @@ In order to use `$HOME` variable, you can create a `home` variable in Packer:
And this will be available to be used in the rest of the template, i.e.:
```json
``` json
{
"builders": [
{