diff --git a/Makefile b/Makefile index ba2629961..f86b30e06 100644 --- a/Makefile +++ b/Makefile @@ -60,6 +60,9 @@ fmt: ## Format Go code fmt-check: ## Check go code formatting $(CURDIR)/scripts/gofmtcheck.sh $(GOFMT_FILES) +fmt-docs: + @find ./website/source/docs -name "*.md" -exec pandoc --wrap auto --columns 79 --atx-headers -s -f "markdown_github+yaml_metadata_block" -t "markdown_github+yaml_metadata_block" {} -o {} \; + # Install js-beautify with npm install -g js-beautify fmt-examples: find examples -name *.json | xargs js-beautify -r -s 2 -n -eol "\n" @@ -91,4 +94,4 @@ updatedeps: help: @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' -.PHONY: bin checkversion ci default deps fmt fmt-examples generate releasebin test testacc testrace updatedeps +.PHONY: bin checkversion ci default deps fmt fmt-docs fmt-examples generate releasebin test testacc testrace updatedeps diff --git a/website/source/docs/basics/terminology.html.md b/website/source/docs/basics/terminology.html.md index 10249e4ac..4754bf886 100644 --- a/website/source/docs/basics/terminology.html.md +++ b/website/source/docs/basics/terminology.html.md @@ -1,12 +1,12 @@ --- +description: | + There are a handful of terms used throughout the Packer documentation where + the meaning may not be immediately obvious if you haven't used Packer before. + Luckily, there are relatively few. This page documents all the terminology + required to understand and use Packer. The terminology is in alphabetical + order for quick referencing. layout: docs page_title: Terminology -description: |- - There are a handful of terms used throughout the Packer documentation where - the meaning may not be immediately obvious if you haven't used Packer before. - Luckily, there are relatively few. This page documents all the terminology - required to understand and use Packer. The terminology is in alphabetical - order for quick referencing. 
--- # Packer Terminology @@ -17,39 +17,39 @@ Luckily, there are relatively few. This page documents all the terminology required to understand and use Packer. The terminology is in alphabetical order for quick referencing. -- `Artifacts` are the results of a single build, and are usually a set of IDs or - files to represent a machine image. Every builder produces a single artifact. - As an example, in the case of the Amazon EC2 builder, the artifact is a set of - AMI IDs (one per region). For the VMware builder, the artifact is a directory - of files comprising the created virtual machine. +- `Artifacts` are the results of a single build, and are usually a set of IDs or + files to represent a machine image. Every builder produces a single artifact. + As an example, in the case of the Amazon EC2 builder, the artifact is a set of + AMI IDs (one per region). For the VMware builder, the artifact is a directory + of files comprising the created virtual machine. -- `Builds` are a single task that eventually produces an image for a single - platform. Multiple builds run in parallel. Example usage in a sentence: "The - Packer build produced an AMI to run our web application." Or: "Packer is - running the builds now for VMware, AWS, and VirtualBox." +- `Builds` are a single task that eventually produces an image for a single + platform. Multiple builds run in parallel. Example usage in a sentence: "The + Packer build produced an AMI to run our web application." Or: "Packer is + running the builds now for VMware, AWS, and VirtualBox." -- `Builders` are components of Packer that are able to create a machine image - for a single platform. Builders read in some configuration and use that to run - and generate a machine image. A builder is invoked as part of a build in order - to create the actual resulting images. Example builders include VirtualBox, - VMware, and Amazon EC2. Builders can be created and added to Packer in the - form of plugins. 
+- `Builders` are components of Packer that are able to create a machine image + for a single platform. Builders read in some configuration and use that to run + and generate a machine image. A builder is invoked as part of a build in order + to create the actual resulting images. Example builders include VirtualBox, + VMware, and Amazon EC2. Builders can be created and added to Packer in the + form of plugins. -- `Commands` are sub-commands for the `packer` program that perform some job. An - example command is "build", which is invoked as `packer build`. Packer ships - with a set of commands out of the box in order to define its command-line - interface. +- `Commands` are sub-commands for the `packer` program that perform some job. An + example command is "build", which is invoked as `packer build`. Packer ships + with a set of commands out of the box in order to define its command-line + interface. -- `Post-processors` are components of Packer that take the result of a builder - or another post-processor and process that to create a new artifact. Examples - of post-processors are compress to compress artifacts, upload to upload - artifacts, etc. +- `Post-processors` are components of Packer that take the result of a builder + or another post-processor and process that to create a new artifact. Examples + of post-processors are compress to compress artifacts, upload to upload + artifacts, etc. -- `Provisioners` are components of Packer that install and configure software - within a running machine prior to that machine being turned into a static - image. They perform the major work of making the image contain useful - software. Example provisioners include shell scripts, Chef, Puppet, etc. +- `Provisioners` are components of Packer that install and configure software + within a running machine prior to that machine being turned into a static + image. They perform the major work of making the image contain useful + software. 
Example provisioners include shell scripts, Chef, Puppet, etc. -- `Templates` are JSON files which define one or more builds by configuring the - various components of Packer. Packer is able to read a template and use that - information to create multiple machine images in parallel. +- `Templates` are JSON files which define one or more builds by configuring the + various components of Packer. Packer is able to read a template and use that + information to create multiple machine images in parallel. diff --git a/website/source/docs/builders/alicloud-ecs.html.md b/website/source/docs/builders/alicloud-ecs.html.md index f4d22c100..6db22f4fd 100644 --- a/website/source/docs/builders/alicloud-ecs.html.md +++ b/website/source/docs/builders/alicloud-ecs.html.md @@ -4,7 +4,7 @@ description: | customized images based on an existing base images. layout: docs page_title: Alicloud Image Builder -... +--- # Alicloud Image Builder @@ -22,174 +22,170 @@ builder. ### Required: -- `access_key` (string) - This is the Alicloud access key. It must be provided, - but it can also be sourced from the `ALICLOUD_ACCESS_KEY` environment - variable. +- `access_key` (string) - This is the Alicloud access key. It must be provided, + but it can also be sourced from the `ALICLOUD_ACCESS_KEY` environment + variable. -- `secret_key` (string) - This is the Alicloud secret key. It must be provided, - but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment - variable. +- `secret_key` (string) - This is the Alicloud secret key. It must be provided, + but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment + variable. -- `region` (string) - This is the Alicloud region. It must be provided, but it - can also be sourced from the `ALICLOUD_REGION` environment variables. +- `region` (string) - This is the Alicloud region. It must be provided, but it + can also be sourced from the `ALICLOUD_REGION` environment variables. -- `instance_type` (string) - Type of the instance. 
For values, see [Instance - Type Table](). You can also obtain the latest instance type table by invoking - the [Querying Instance Type - Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik) - interface. - -- `image_name` (string) - The name of the user-defined image, [2, 128] English - or Chinese characters. It must begin with an uppercase/lowercase letter or - a Chinese character, and may contain numbers, `_` or `-`. It cannot begin with - `http://` or `https://`. - -- `source_image` (string) - This is the base image id which you want to create - your customized images. +- `instance_type` (string) - Type of the instance. For values, see [Instance + Type Table](). You can also obtain the latest instance type table by invoking + the [Querying Instance Type + Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik) + interface. +- `image_name` (string) - The name of the user-defined image, \[2, 128\] English + or Chinese characters. It must begin with an uppercase/lowercase letter or + a Chinese character, and may contain numbers, `_` or `-`. It cannot begin with + `http://` or `https://`. +- `source_image` (string) - This is the base image id which you want to create + your customized images. ### Optional: -- `skip_region_validation` (bool) - The region validation can be skipped if this - value is true, the default value is false. +- `skip_region_validation` (bool) - The region validation can be skipped if this + value is true, the default value is false. -- `image_description` (string) - The description of the image, with a length - limit of 0 to 256 characters. Leaving it blank means null, which is the - default value. It cannot begin with http:// or https://. +- `image_description` (string) - The description of the image, with a length + limit of 0 to 256 characters. Leaving it blank means null, which is the + default value. It cannot begin with `http://` or `https://`. 
-- `image_version` (string) - The version number of the image, with a length limit - of 1 to 40 English characters. +- `image_version` (string) - The version number of the image, with a length limit + of 1 to 40 English characters. -- `image_share_account` (array of string) - The IDs of to-be-added Aliyun - accounts to which the image is shared. The number of accounts is 1 to 10. If - number of accounts is greater than 10, this parameter is ignored. +- `image_share_account` (array of string) - The IDs of to-be-added Aliyun + accounts to which the image is shared. The number of accounts is 1 to 10. If + number of accounts is greater than 10, this parameter is ignored. -- `image_copy_regions` (array of string) - Copy to the destination regionIds. +- `image_copy_regions` (array of string) - Copy to the destination regionIds. -- `image_copy_names` (array of string) - The name of the destination image, [2, - 128] English or Chinese characters. It must begin with an uppercase/lowercase - letter or a Chinese character, and may contain numbers, `_` or `-`. It cannot - begin with `http://` or `https://`. +- `image_copy_names` (array of string) - The name of the destination image, \[2, + 128\] English or Chinese characters. It must begin with an uppercase/lowercase + letter or a Chinese character, and may contain numbers, `_` or `-`. It cannot + begin with `http://` or `https://`. -- `image_force_delete` (bool) - If this value is true, when the target image name - is duplicated with an existing image, it will delete the existing image and - then create the target image, otherwise, the creation will fail. The default - value is false. +- `image_force_delete` (bool) - If this value is true, when the target image name + is duplicated with an existing image, it will delete the existing image and + then create the target image, otherwise, the creation will fail. The default + value is false. 
+- `image_force_delete_snapshots` (bool) - If this value is true, when deleting the +   duplicated existing image, the source snapshot of this image will be deleted +   as well. -- `disk_name` (string) - The value of disk name is blank by default. [2, 128] -   English or Chinese characters, must begin with an uppercase/lowercase letter -   or Chinese character. Can contain numbers, `.`, `_` and `-`. The disk name -   will appear on the console. It cannot begin with http:// or https://. +- `disk_name` (string) - The value of disk name is blank by default. \[2, 128\] +   English or Chinese characters, must begin with an uppercase/lowercase letter +   or Chinese character. Can contain numbers, `.`, `_` and `-`. The disk name +   will appear on the console. It cannot begin with `http://` or `https://`. -- `disk_category` (string) - Category of the data disk. Optional values are: -   - cloud - general cloud disk -   - cloud_efficiency - efficiency cloud disk -   - cloud_ssd - cloud SSD +- `disk_category` (string) - Category of the data disk. Optional values are: +   - cloud - general cloud disk +   - cloud\_efficiency - efficiency cloud disk +   - cloud\_ssd - cloud SSD Default value: cloud. -- `disk_size` (int) - Size of the system disk, in GB, values range: -   - cloud - 5 ~ 2000 -   - cloud_efficiency - 20 ~ 2048 -   - cloud_ssd - 20 ~ 2048 +- `disk_size` (int) - Size of the system disk, in GB, values range: +   - cloud - 5 ~ 2000 +   - cloud\_efficiency - 20 ~ 2048 +   - cloud\_ssd - 20 ~ 2048 The value should be equal to or greater than the size of the specific SnapshotId. -- `disk_snapshot_id` (string) - Snapshots are used to create the data disk -   After this parameter is specified, Size is ignored. The actual size of the -   created disk is the size of the specified snapshot. 
+- `disk_snapshot_id` (string) - Snapshots are used to create the data disk. +   After this parameter is specified, Size is ignored. The actual size of the +   created disk is the size of the specified snapshot. Snapshots created on or before July 15, 2013 cannot be used to create a disk. -- `disk_description` (string) - The value of disk description is blank by default. [2, 256] characters. The disk description will appear on the console. It cannot begin with http:// or https://. +- `disk_description` (string) - The value of disk description is blank by default. \[2, 256\] characters. The disk description will appear on the console. It cannot begin with `http://` or `https://`. -- `disk_delete_with_instance` (string) - Whether or not the disk is released along with the instance: -   - True indicates that when the instance is released, this disk will be released with it -   - False indicates that when the instance is released, this disk will be retained. +- `disk_delete_with_instance` (string) - Whether or not the disk is released along with the instance: +   - True indicates that when the instance is released, this disk will be released with it +   - False indicates that when the instance is released, this disk will be retained. -- `disk_device` (string) - Device information of the related instance: such as -   `/dev/xvdb` It is null unless the Status is In_use. +- `disk_device` (string) - Device information of the related instance: such as +   `/dev/xvdb`. It is null unless the Status is In\_use. -- `zone_id` (string) - ID of the zone to which the disk belongs. +- `zone_id` (string) - ID of the zone to which the disk belongs. -- `io_optimized` (string) - I/O optimized. Optional values are: -   - none: none I/O Optimized -   - optimized: I/O Optimized +- `io_optimized` (string) - I/O optimized. Optional values are: +   - none: none I/O Optimized +   - optimized: I/O Optimized Default value: none for Generation I instances; optimized for other instances. 
-- `force_stop_instance` (bool) - Whether to force shutdown upon device restart. - The default value is `false`. +- `force_stop_instance` (bool) - Whether to force shutdown upon device restart. + The default value is `false`. If it is set to `false`, the system is shut down normally; if it is set to `true`, the system is forced to shut down. -- `security_group_id` (string) - ID of the security group to which a newly - created instance belongs. Mutual access is allowed between instances in one - security group. If not specified, the newly created instance will be added to - the default security group. If the default group doesn’t exist, or the number - of instances in it has reached the maximum limit, a new security group will - be created automatically. +- `security_group_id` (string) - ID of the security group to which a newly + created instance belongs. Mutual access is allowed between instances in one + security group. If not specified, the newly created instance will be added to + the default security group. If the default group doesn’t exist, or the number + of instances in it has reached the maximum limit, a new security group will + be created automatically. -- `security_group_name` (string) - The security group name. The default value is - blank. [2, 128] English or Chinese characters, must begin with an - uppercase/lowercase letter or Chinese character. Can contain numbers, `.`, - `_` or `-`. It cannot begin with `http://` or `https://`. +- `security_group_name` (string) - The security group name. The default value is + blank. \[2, 128\] English or Chinese characters, must begin with an + uppercase/lowercase letter or Chinese character. Can contain numbers, `.`, + `_` or `-`. It cannot begin with `http://` or `https://`. -- `user_data` (string) - The UserData of an instance must be encoded in `Base64` - format, and the maximum size of the raw data is `16 KB`. 
+- `user_data` (string) - The UserData of an instance must be encoded in `Base64` + format, and the maximum size of the raw data is `16 KB`. -- `user_data_file` (string) - The file name of the userdata. +- `user_data_file` (string) - The file name of the userdata. -- `vpc_id` (string) - VPC ID allocated by the system. +- `vpc_id` (string) - VPC ID allocated by the system. -- `vpc_name` (string) - The VPC name. The default value is blank. [2, 128] - English or Chinese characters, must begin with an uppercase/lowercase letter - or Chinese character. Can contain numbers, `_` and `-`. The disk description - will appear on the console. Cannot begin with `http://` or `https://`. +- `vpc_name` (string) - The VPC name. The default value is blank. \[2, 128\] + English or Chinese characters, must begin with an uppercase/lowercase letter + or Chinese character. Can contain numbers, `_` and `-`. The disk description + will appear on the console. Cannot begin with `http://` or `https://`. -- `vpc_cidr_block` (string) - Value options: `192.168.0.0/16` and `172.16.0.0/16`. - When not specified, the default value is `172.16.0.0/16`. +- `vpc_cidr_block` (string) - Value options: `192.168.0.0/16` and `172.16.0.0/16`. + When not specified, the default value is `172.16.0.0/16`. -- `vswitch_id` (string) - The ID of the VSwitch to be used. +- `vswitch_id` (string) - The ID of the VSwitch to be used. -- `instance_name` (string) - Display name of the instance, which is a string of - 2 to 128 Chinese or English characters. It must begin with an - uppercase/lowercase letter or a Chinese character and can contain numerals, - `.`, `_`, or `-`. The instance name is displayed on the Alibaba Cloud - console. If this parameter is not specified, the default value is InstanceId - of the instance. It cannot begin with http:// or https://. +- `instance_name` (string) - Display name of the instance, which is a string of + 2 to 128 Chinese or English characters. 
It must begin with an + uppercase/lowercase letter or a Chinese character and can contain numerals, + `.`, `_`, or `-`. The instance name is displayed on the Alibaba Cloud + console. If this parameter is not specified, the default value is InstanceId + of the instance. It cannot begin with `http://` or `https://`. -- `internet_charge_type` (string) - Internet charge type, which can be - `PayByTraffic` or `PayByBandwidth`. Optional values: - - PayByBandwidth - - PayByTraffic +- `internet_charge_type` (string) - Internet charge type, which can be + `PayByTraffic` or `PayByBandwidth`. Optional values: + - PayByBandwidth + - PayByTraffic If this parameter is not specified, the default value is `PayByBandwidth`. - -- `internet_max_bandwidth_out` (string) - Maximum outgoing bandwidth to the public - network, measured in Mbps (Mega bit per second). +- `internet_max_bandwidth_out` (string) - Maximum outgoing bandwidth to the public + network, measured in Mbps (Mega bit per second). Value range: - - PayByBandwidth: [0, 100]. If this parameter is not specified, API automatically sets it to 0 Mbps. - - PayByTraffic: [1, 100]. If this parameter is not specified, an error is returned. - -- `temporary_key_pair_name` (string) - The name of the temporary key pair to - generate. By default, Packer generates a name that looks like `packer_`, - where `` is a 36 character unique identifier. + - PayByBandwidth: \[0, 100\]. If this parameter is not specified, API automatically sets it to 0 Mbps. + - PayByTraffic: \[1, 100\]. If this parameter is not specified, an error is returned. +- `temporary_key_pair_name` (string) - The name of the temporary key pair to + generate. By default, Packer generates a name that looks like `packer_`, + where `` is a 36 character unique identifier. ## Basic Example Here is a basic example for Alicloud. -```json +``` json { "variables": { "access_key": "{{env `ALICLOUD_ACCESS_KEY`}}", @@ -217,7 +213,6 @@ Here is a basic example for Alicloud. 
} ``` - See the [examples/alicloud](https://github.com/hashicorp/packer/tree/master/examples/alicloud) folder in the packer project for more examples. diff --git a/website/source/docs/builders/amazon-chroot.html.md b/website/source/docs/builders/amazon-chroot.html.md index 905dafb92..ee2629578 100644 --- a/website/source/docs/builders/amazon-chroot.html.md +++ b/website/source/docs/builders/amazon-chroot.html.md @@ -1,12 +1,12 @@ --- +description: | + The amazon-chroot Packer builder is able to create Amazon AMIs backed by an + EBS volume as the root device. For more information on the difference between + instance storage and EBS-backed instances, storage for the root device section + in the EC2 documentation. layout: docs -sidebar_current: docs-builders-amazon-chroot -page_title: Amazon chroot - Builders -description: |- - The amazon-chroot Packer builder is able to create Amazon AMIs backed by an - EBS volume as the root device. For more information on the difference between - instance storage and EBS-backed instances, storage for the root device section - in the EC2 documentation. +page_title: 'Amazon chroot - Builders' +sidebar_current: 'docs-builders-amazon-chroot' --- # AMI Builder (chroot) @@ -24,7 +24,7 @@ builder is able to build an EBS-backed AMI without launching a new EC2 instance. This can dramatically speed up AMI builds for organizations who need the extra fast build. -~> **This is an advanced builder** If you're just getting started with +~> **This is an advanced builder** If you're just getting started with Packer, we recommend starting with the [amazon-ebs builder](/docs/builders/amazon-ebs.html), which is much easier to use. @@ -57,216 +57,215 @@ each category, the available configuration keys are alphabetized. ### Required: -- `access_key` (string) - The access key used to communicate with AWS. 
[Learn - how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) +- `access_key` (string) - The access key used to communicate with AWS. [Learn + how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `ami_name` (string) - The name of the resulting AMI that will appear when - managing AMIs in the AWS console or via APIs. This must be unique. To help - make this unique, use a function like `timestamp` (see [template - engine](/docs/templates/engine.html) for more info) +- `ami_name` (string) - The name of the resulting AMI that will appear when + managing AMIs in the AWS console or via APIs. This must be unique. To help + make this unique, use a function like `timestamp` (see [template + engine](/docs/templates/engine.html) for more info) -- `secret_key` (string) - The secret key used to communicate with AWS. [Learn - how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) +- `secret_key` (string) - The secret key used to communicate with AWS. [Learn + how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `source_ami` (string) - The source AMI whose root volume will be copied and - provisioned on the currently running instance. This must be an EBS-backed AMI - with a root volume snapshot that you have access to. Note: this is not used - when `from_scratch` is set to true. +- `source_ami` (string) - The source AMI whose root volume will be copied and + provisioned on the currently running instance. This must be an EBS-backed AMI + with a root volume snapshot that you have access to. Note: this is not used + when `from_scratch` is set to true. ### Optional: -- `ami_description` (string) - The description to set for the +- `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. 
This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with name of the region where this is built. -- `ami_groups` (array of strings) - A list of groups that have access to +- `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch the AMI. `all` will make the AMI publicly accessible. -- `ami_product_codes` (array of strings) - A list of product codes to +- `ami_product_codes` (array of strings) - A list of product codes to associate with the AMI. By default no product codes are associated with the AMI. -- `ami_regions` (array of strings) - A list of regions to copy the AMI to. +- `ami_regions` (array of strings) - A list of regions to copy the AMI to. Tags and attributes are copied along with the AMI. AMI copying takes time depending on the size of the AMI, but will generally take many minutes. -- `ami_users` (array of strings) - A list of account IDs that have access to +- `ami_users` (array of strings) - A list of account IDs that have access to launch the resulting AMI(s). By default no additional users other than the user creating the AMI has permissions to launch it. -- `ami_virtualization_type` (string) - The type of virtualization for the AMI +- `ami_virtualization_type` (string) - The type of virtualization for the AMI you are building. This option is required to register HVM images. Can be "paravirtual" (default) or "hvm". -- `chroot_mounts` (array of array of strings) - This is a list of devices +- `chroot_mounts` (array of array of strings) - This is a list of devices to mount into the chroot environment. This configuration parameter requires some additional documentation which is in the "Chroot Mounts" section below. Please read that section for more information on how to use this. -- `command_wrapper` (string) - How to run shell commands. 
This defaults to +- `command_wrapper` (string) - How to run shell commands. This defaults to `{{.Command}}`. This may be useful to set if you want to set environmental variables or perhaps run it with `sudo` or so on. This is a configuration template where the `.Command` variable is replaced with the command to be run. Defaults to "{{.Command}}". -- `copy_files` (array of strings) - Paths to files on the running EC2 instance +- `copy_files` (array of strings) - Paths to files on the running EC2 instance that will be copied into the chroot environment prior to provisioning. Defaults to `/etc/resolv.conf` so that DNS lookups work. Pass an empty list to skip copying `/etc/resolv.conf`. You may need to do this if you're building an image that uses systemd. -- `custom_endpoint_ec2` (string) - this option is useful if you use +- `custom_endpoint_ec2` (string) - This option is useful if you use another cloud provider that provides an API compatible with AWS EC2, - specify another endpoint like this "https://ec2.another.endpoint..com" + specify another endpoint like this "https://ec2.another.endpoint..com" -- `device_path` (string) - The path to the device where the root volume of the +- `device_path` (string) - The path to the device where the root volume of the source AMI will be attached. This defaults to "" (empty string), which forces Packer to find an open device automatically. -- `enhanced_networking` (boolean) - Enable enhanced - networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add - `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make - sure enhanced networking is enabled on your instance. See [Amazon's - documentation on enabling enhanced networking]( - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) +- `enhanced_networking` (boolean) - Enable enhanced + networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add + `ec2:ModifyInstanceAttribute` to your AWS IAM policy. 
Note: you must make + sure enhanced networking is enabled on your instance. See [Amazon's + documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) -- `force_deregister` (boolean) - Force Packer to first deregister an existing +- `force_deregister` (boolean) - Force Packer to first deregister an existing AMI if one with the same name already exists. Default `false`. -- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with +- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with AMIs, which have been deregistered by `force_deregister`. Default `false`. -- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy of the +- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy of the AMI with an encrypted boot volume (discarding the initial unencrypted AMI in the process). Default `false`. -- `kms_key_id` (string) - The ID of the KMS key to use for boot volume encryption. +- `kms_key_id` (string) - The ID of the KMS key to use for boot volume encryption. This only applies to the main `region`, other regions where the AMI will be copied will be encrypted by the default EBS KMS key. -- `from_scratch` (boolean) - Build a new volume instead of starting from an +- `from_scratch` (boolean) - Build a new volume instead of starting from an existing AMI root volume snapshot. Default `false`. If true, `source_ami` is no longer used and the following options become required: `ami_virtualization_type`, `pre_mount_commands` and `root_volume_size`. The below options are also required in this mode only: -- `ami_block_device_mappings` (array of block device mappings) - Add one or +- `ami_block_device_mappings` (array of block device mappings) - Add one or more [block device mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) to the AMI. 
These will be attached when booting a new instance from your AMI. Your options here may vary depending on the type of VM you use. The block device mappings allow for the following configuration: - - `delete_on_termination` (boolean) - Indicates whether the EBS volume is + - `delete_on_termination` (boolean) - Indicates whether the EBS volume is deleted on instance termination. Default `false`. **NOTE**: If this value is not explicitly set to `true` and volumes are not cleaned up by an alternative method, additional volumes will accumulate after every build. - - `device_name` (string) - The device name exposed to the instance (for - example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. + - `device_name` (string) - The device name exposed to the instance (for + example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. - - `encrypted` (boolean) - Indicates whether to encrypt the volume or not + - `encrypted` (boolean) - Indicates whether to encrypt the volume or not - - `iops` (integer) - The number of I/O operations per second (IOPS) that the + - `iops` (integer) - The number of I/O operations per second (IOPS) that the volume supports. See the documentation on [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) for more information - - `no_device` (boolean) - Suppresses the specified device included in the + - `no_device` (boolean) - Suppresses the specified device included in the block device mapping of the AMI - - `snapshot_id` (string) - The ID of the snapshot + - `snapshot_id` (string) - The ID of the snapshot - - `virtual_name` (string) - The virtual device name. See the documentation on + - `virtual_name` (string) - The virtual device name. See the documentation on [Block Device Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) for more information - - `volume_size` (integer) - The size of the volume, in GiB. 
Required if not + - `volume_size` (integer) - The size of the volume, in GiB. Required if not specifying a `snapshot_id` - - `volume_type` (string) - The volume type. gp2 for General Purpose (SSD) + - `volume_type` (string) - The volume type. gp2 for General Purpose (SSD) volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic volumes -- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, - along with the custom kms key id to use for encryption for that region. - Keys must match the regions provided in `ami_regions`. If you just want to - encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. - If you want a region to be encrypted with that region's default key ID, you can - use an empty string `""` instead of a key id in this map. (e.g. `"us-east-1": ""`) - However, you cannot use default key IDs if you are using this in conjunction with - `snapshot_users` -- in that situation you must use custom keys. +- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, + along with the custom kms key id to use for encryption for that region. + Keys must match the regions provided in `ami_regions`. If you just want to + encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. + If you want a region to be encrypted with that region's default key ID, you can + use an empty string `""` instead of a key id in this map. (e.g. `"us-east-1": ""`) + However, you cannot use default key IDs if you are using this in conjunction with + `snapshot_users` -- in that situation you must use custom keys. -- `root_device_name` (string) - The root device name. For example, `xvda`. +- `root_device_name` (string) - The root device name. For example, `xvda`. -- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - code. This should probably be a user variable since it changes all the time. 
+- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + code. This should probably be a user variable since it changes all the time. -- `mount_path` (string) - The path where the volume will be mounted. This is +- `mount_path` (string) - The path where the volume will be mounted. This is where the chroot environment will be. This defaults to `/mnt/packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration template where the `.Device` variable is replaced with the name of the device where the volume is attached. -- `mount_partition` (integer) - The partition number containing the +- `mount_partition` (integer) - The partition number containing the / partition. By default this is the first partition of the volume. -- `mount_options` (array of strings) - Options to supply the `mount` command +- `mount_options` (array of strings) - Options to supply the `mount` command when mounting devices. Each option will be prefixed with `-o` and supplied to the `mount` command run by Packer. Because this command is run in a shell, user discretion is advised. See [this manual page for the mount command](http://linuxcommand.org/man_pages/mount8.html) for valid file system specific options -- `pre_mount_commands` (array of strings) - A series of commands to execute +- `pre_mount_commands` (array of strings) - A series of commands to execute after attaching the root volume and before mounting the chroot. This is not required unless using `from_scratch`. If so, this should include any partitioning and filesystem creation commands. The path to the device is provided by `{{.Device}}`. -- `profile` (string) - The profile to use in the shared credentials file for +- `profile` (string) - The profile to use in the shared credentials file for AWS. See Amazon's documentation on [specifying profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) for more details.
-- `post_mount_commands` (array of strings) - As `pre_mount_commands`, but the +- `post_mount_commands` (array of strings) - As `pre_mount_commands`, but the commands are executed after mounting the root device and before the extra mount and copy steps. The device and mount path are provided by `{{.Device}}` and `{{.MountPath}}`. -- `root_volume_size` (integer) - The size of the root volume in GB for the +- `root_volume_size` (integer) - The size of the root volume in GB for the chroot environment and the resulting AMI. Default size is the snapshot size of the `source_ami` unless `from_scratch` is `true`, in which case this field must be defined. -- `skip_region_validation` (boolean) - Set to true if you want to skip +- `skip_region_validation` (boolean) - Set to true if you want to skip validation of the `ami_regions` configuration option. Default `false`. -- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. +- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. They will override AMI tags if already applied to snapshot. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with name of the region where this is built. -- `snapshot_groups` (array of strings) - A list of groups that have access to +- `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create volumes from the snapshot(s). `all` will make the snapshot publicly accessible. -- `snapshot_users` (array of strings) - A list of account IDs that have access to +- `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s). By default no additional users other than the user creating the AMI has permissions to create volumes from the backing snapshot(s).
-- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. +- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. Example: - ```json + ``` json "source_ami_filter": { "filters": { "virtualization-type": "hvm", @@ -282,18 +281,18 @@ each category, the available configuration keys are alphabetized. NOTE: This will fail unless *exactly* one AMI is returned. In the above example, `most_recent` will cause this to succeed by selecting the newest image. - - `filters` (map of strings) - filters used to select a `source_ami`. - NOTE: This will fail unless *exactly* one AMI is returned. - Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) - is valid. + - `filters` (map of strings) - filters used to select a `source_ami`. + NOTE: This will fail unless *exactly* one AMI is returned. + Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) + is valid. - - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. - This is helpful to limit the AMIs to a trusted third party, or to your own account. + - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. + This is helpful to limit the AMIs to a trusted third party, or to your own account. - - `most_recent` (bool) - Selects the newest created image when true. - This is most useful for selecting a daily distro build. + - `most_recent` (bool) - Selects the newest created image when true. + This is most useful for selecting a daily distro build. -- `tags` (object of key/value strings) - Tags applied to the AMI. This is a +- `tags` (object of key/value strings) - Tags applied to the AMI. 
This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with name of the region where this @@ -303,7 +302,7 @@ each category, the available configuration keys are alphabetized. Here is a basic example. It is completely valid except for the access keys: -```json +``` json { "type": "amazon-chroot", "access_key": "YOUR KEY HERE", @@ -319,18 +318,18 @@ The `chroot_mounts` configuration can be used to mount specific devices within the chroot. By default, the following additional mounts are added into the chroot by Packer: -- `/proc` (proc) -- `/sys` (sysfs) -- `/dev` (bind to real `/dev`) -- `/dev/pts` (devpts) -- `/proc/sys/fs/binfmt_misc` (binfmt\_misc) +- `/proc` (proc) +- `/sys` (sysfs) +- `/dev` (bind to real `/dev`) +- `/dev/pts` (devpts) +- `/proc/sys/fs/binfmt_misc` (binfmt\_misc) These default mounts are usually good enough for anyone and are sane defaults. However, if you want to change or add the mount points, you may use the `chroot_mounts` configuration. Here is an example configuration which only mounts `/proc` and `/dev`: -```json +``` json { "chroot_mounts": [ ["proc", "proc", "/proc"], @@ -342,12 +341,12 @@ mounts `/prod` and `/dev`: `chroot_mounts` is a list of 3-tuples of strings. The three components of the 3-tuple, in order, are: -- The filesystem type. If this is "bind", then Packer will properly bind the +- The filesystem type. If this is "bind", then Packer will properly bind the filesystem to another mount point. -- The source device. +- The source device. -- The mount directory. +- The mount directory.
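To make the `chroot_mounts` 3-tuple format above concrete, here is a hypothetical entry (illustrative only; the `/var/cache` bind path is an assumption, not from the original docs) that bind-mounts a host directory into the chroot alongside the usual `proc` mount:

``` json
{
  "chroot_mounts": [
    ["proc", "proc", "/proc"],
    ["bind", "/var/cache", "/var/cache"]
  ]
}
```

For a `bind` entry, the source-device field holds the host path to bind, and the third field is where that path appears inside the chroot.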
## Parallelism @@ -370,7 +369,7 @@ For debian based distributions you can setup a file which will prevent packages installed by your provisioners from starting services: -```json +``` json { "type": "shell", "inline": [ @@ -398,7 +397,7 @@ The device setup commands partition the device with one partition for use as an HVM image and format it ext4. This builder block should be followed by provisioning commands to install the os and bootloader. -```json +``` json { "type": "amazon-chroot", "ami_name": "packer-from-scratch {{timestamp}}", diff --git a/website/source/docs/builders/amazon-ebs.html.md b/website/source/docs/builders/amazon-ebs.html.md index 3d5f1606c..d360c87f4 100644 --- a/website/source/docs/builders/amazon-ebs.html.md +++ b/website/source/docs/builders/amazon-ebs.html.md @@ -1,12 +1,12 @@ --- +description: | + The amazon-ebs Packer builder is able to create Amazon AMIs backed by EBS + volumes for use in EC2. For more information on the difference between + EBS-backed instances and instance-store backed instances, see the storage for + the root device section in the EC2 documentation. layout: docs -sidebar_current: docs-builders-amazon-ebsbacked -page_title: Amazon EBS - Builders -description: |- - The amazon-ebs Packer builder is able to create Amazon AMIs backed by EBS - volumes for use in EC2. For more information on the difference between - EBS-backed instances and instance-store backed instances, see the storage for - the root device section in the EC2 documentation. +page_title: 'Amazon EBS - Builders' +sidebar_current: 'docs-builders-amazon-ebsbacked' --- # AMI Builder (EBS backed) @@ -29,7 +29,7 @@ bit. The builder does *not* manage AMIs. Once it creates an AMI and stores it in your account, it is up to you to use, delete, etc. the AMI. --> **Note:** Temporary resources are, by default, all created with the prefix +-> **Note:** Temporary resources are, by default, all created with the prefix `packer`. 
This can be useful if you want to restrict the security groups and key pairs Packer is able to operate on. @@ -45,222 +45,221 @@ builder. ### Required: -- `access_key` (string) - The access key used to communicate with AWS. [Learn - how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) +- `access_key` (string) - The access key used to communicate with AWS. [Learn + how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `ami_name` (string) - The name of the resulting AMI that will appear when - managing AMIs in the AWS console or via APIs. This must be unique. To help - make this unique, use a function like `timestamp` (see [template - engine](/docs/templates/engine.html) for more info) +- `ami_name` (string) - The name of the resulting AMI that will appear when + managing AMIs in the AWS console or via APIs. This must be unique. To help + make this unique, use a function like `timestamp` (see [template + engine](/docs/templates/engine.html) for more info) -- `instance_type` (string) - The EC2 instance type to use while building the - AMI, such as `t2.small`. +- `instance_type` (string) - The EC2 instance type to use while building the + AMI, such as `t2.small`. -- `region` (string) - The name of the region, such as `us-east-1`, in which to - launch the EC2 instance to create the AMI. +- `region` (string) - The name of the region, such as `us-east-1`, in which to + launch the EC2 instance to create the AMI. -- `secret_key` (string) - The secret key used to communicate with AWS. [Learn - how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) +- `secret_key` (string) - The secret key used to communicate with AWS. [Learn + how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `source_ami` (string) - The initial AMI used as a base for the newly - created machine. `source_ami_filter` may be used instead to populate this - automatically. 
+- `source_ami` (string) - The initial AMI used as a base for the newly + created machine. `source_ami_filter` may be used instead to populate this + automatically. ### Optional: -- `ami_block_device_mappings` (array of block device mappings) - Add one or - more [block device mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) - to the AMI. These will be attached when booting a new instance from your - AMI. To add a block device during the Packer build see - `launch_block_device_mappings` below. Your options here may vary depending - on the type of VM you use. The block device mappings allow for the following - configuration: +- `ami_block_device_mappings` (array of block device mappings) - Add one or + more [block device mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) + to the AMI. These will be attached when booting a new instance from your + AMI. To add a block device during the Packer build see + `launch_block_device_mappings` below. Your options here may vary depending + on the type of VM you use. The block device mappings allow for the following + configuration: - - `delete_on_termination` (boolean) - Indicates whether the EBS volume is - deleted on instance termination. Default `false`. **NOTE**: If this - value is not explicitly set to `true` and volumes are not cleaned up by - an alternative method, additional volumes will accumulate after - every build. +- `delete_on_termination` (boolean) - Indicates whether the EBS volume is + deleted on instance termination. Default `false`. **NOTE**: If this + value is not explicitly set to `true` and volumes are not cleaned up by + an alternative method, additional volumes will accumulate after + every build. - - `device_name` (string) - The device name exposed to the instance (for - example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. 
+- `device_name` (string) - The device name exposed to the instance (for + example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. - - `encrypted` (boolean) - Indicates whether to encrypt the volume or not +- `encrypted` (boolean) - Indicates whether to encrypt the volume or not - - `iops` (integer) - The number of I/O operations per second (IOPS) that the - volume supports. See the documentation on - [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) - for more information +- `iops` (integer) - The number of I/O operations per second (IOPS) that the + volume supports. See the documentation on + [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) + for more information - - `no_device` (boolean) - Suppresses the specified device included in the - block device mapping of the AMI +- `no_device` (boolean) - Suppresses the specified device included in the + block device mapping of the AMI - - `snapshot_id` (string) - The ID of the snapshot +- `snapshot_id` (string) - The ID of the snapshot - - `virtual_name` (string) - The virtual device name. See the documentation on - [Block Device - Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) - for more information +- `virtual_name` (string) - The virtual device name. See the documentation on + [Block Device + Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) + for more information - - `volume_size` (integer) - The size of the volume, in GiB. Required if not - specifying a `snapshot_id` +- `volume_size` (integer) - The size of the volume, in GiB. Required if not + specifying a `snapshot_id` - - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) - volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic - volumes +- `volume_type` (string) - The volume type. 
`gp2` for General Purpose (SSD) + volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic + volumes -- `ami_description` (string) - The description to set for the - resulting AMI(s). By default this description is empty. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. +- `ami_description` (string) - The description to set for the + resulting AMI(s). By default this description is empty. This is a + [template engine](/docs/templates/engine.html) + where the `SourceAMI` variable is replaced with the source AMI ID and + `BuildRegion` variable is replaced with the value of `region`. -- `ami_groups` (array of strings) - A list of groups that have access to - launch the resulting AMI(s). By default no groups have permission to launch - the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't - accept any value other than `all`. +- `ami_groups` (array of strings) - A list of groups that have access to + launch the resulting AMI(s). By default no groups have permission to launch + the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't + accept any value other than `all`. -- `ami_product_codes` (array of strings) - A list of product codes to - associate with the AMI. By default no product codes are associated with - the AMI. +- `ami_product_codes` (array of strings) - A list of product codes to + associate with the AMI. By default no product codes are associated with + the AMI. -- `ami_regions` (array of strings) - A list of regions to copy the AMI to. - Tags and attributes are copied along with the AMI. AMI copying takes time - depending on the size of the AMI, but will generally take many minutes. +- `ami_regions` (array of strings) - A list of regions to copy the AMI to. + Tags and attributes are copied along with the AMI. 
AMI copying takes time + depending on the size of the AMI, but will generally take many minutes. -- `ami_users` (array of strings) - A list of account IDs that have access to - launch the resulting AMI(s). By default no additional users other than the - user creating the AMI has permissions to launch it. +- `ami_users` (array of strings) - A list of account IDs that have access to + launch the resulting AMI(s). By default no additional users other than the + user creating the AMI has permissions to launch it. -- `ami_virtualization_type` (string) - The type of virtualization for the AMI - you are building. This option must match the supported virtualization - type of `source_ami`. Can be `paravirtual` or `hvm`. +- `ami_virtualization_type` (string) - The type of virtualization for the AMI + you are building. This option must match the supported virtualization + type of `source_ami`. Can be `paravirtual` or `hvm`. -- `associate_public_ip_address` (boolean) - If using a non-default VPC, public - IP addresses are not provided by default. If this is toggled, your new - instance will get a Public IP. +- `associate_public_ip_address` (boolean) - If using a non-default VPC, public + IP addresses are not provided by default. If this is toggled, your new + instance will get a Public IP. -- `availability_zone` (string) - Destination availability zone to launch - instance in. Leave this empty to allow Amazon to auto-assign. +- `availability_zone` (string) - Destination availability zone to launch + instance in. Leave this empty to allow Amazon to auto-assign. 
-- `custom_endpoint_ec2` (string) - this option is useful if you use +- `custom_endpoint_ec2` (string) - this option is useful if you use another cloud provider that provides a compatible API with AWS EC2, - specify another endpoint like this "https://ec2.another.endpoint..com" + specify another endpoint like this "https://ec2.another.endpoint..com" -- `disable_stop_instance` (boolean) - Packer normally stops the build instance - after all provisioners have run. For Windows instances, it is sometimes - desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html) - which will stop the instance for you. If this is set to true, Packer *will not* - stop the instance and will wait for you to stop it manually. You can do this - with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). +- `disable_stop_instance` (boolean) - Packer normally stops the build instance + after all provisioners have run. For Windows instances, it is sometimes + desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html) + which will stop the instance for you. If this is set to true, Packer *will not* + stop the instance and will wait for you to stop it manually. You can do this + with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). - ```json + ``` json { "type": "windows-shell", "inline": ["\"c:\\Program Files\\Amazon\\Ec2ConfigService\\ec2config.exe\" -sysprep"] } ``` -- `ebs_optimized` (boolean) - Mark instance as [EBS - Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). - Default `false`. +- `ebs_optimized` (boolean) - Mark instance as [EBS + Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). + Default `false`. -- `enhanced_networking` (boolean) - Enable enhanced - networking (SriovNetSupport and ENA) on HVM-compatible AMIs.
If true, add - `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make - sure enhanced networking is enabled on your instance. See [Amazon's - documentation on enabling enhanced networking]( - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) +- `enhanced_networking` (boolean) - Enable enhanced + networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add + `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make + sure enhanced networking is enabled on your instance. See [Amazon's + documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) -- `force_deregister` (boolean) - Force Packer to first deregister an existing - AMI if one with the same name already exists. Default `false`. +- `force_deregister` (boolean) - Force Packer to first deregister an existing + AMI if one with the same name already exists. Default `false`. -- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with - AMIs, which have been deregistered by `force_deregister`. Default `false`. +- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with + AMIs, which have been deregistered by `force_deregister`. Default `false`. -- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy of the - AMI with an encrypted boot volume (discarding the initial unencrypted AMI in the - process). Default `false`. +- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy of the + AMI with an encrypted boot volume (discarding the initial unencrypted AMI in the + process). Default `false`. -- `kms_key_id` (string) - The ID of the KMS key to use for boot volume encryption. - This only applies to the main `region`, other regions where the AMI will be copied - will be encrypted by the default EBS KMS key. 
+- `kms_key_id` (string) - The ID of the KMS key to use for boot volume encryption. + This only applies to the main `region`, other regions where the AMI will be copied + will be encrypted by the default EBS KMS key. -- `iam_instance_profile` (string) - The name of an [IAM instance - profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) - to launch the EC2 instance with. +- `iam_instance_profile` (string) - The name of an [IAM instance + profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) + to launch the EC2 instance with. -- `launch_block_device_mappings` (array of block device mappings) - Add one or - more block devices before the Packer build starts. These are not necessarily - preserved when booting from the AMI built with Packer. See - `ami_block_device_mappings`, above, for details. +- `launch_block_device_mappings` (array of block device mappings) - Add one or + more block devices before the Packer build starts. These are not necessarily + preserved when booting from the AMI built with Packer. See + `ami_block_device_mappings`, above, for details. -- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - code. This should probably be a user variable since it changes all the time. +- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + code. This should probably be a user variable since it changes all the time. -- `profile` (string) - The profile to use in the shared credentials file for - AWS. See Amazon's documentation on [specifying - profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) - for more details. +- `profile` (string) - The profile to use in the shared credentials file for + AWS. 
See Amazon's documentation on [specifying + profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) + for more details. -- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, - along with the custom kms key id to use for encryption for that region. - Keys must match the regions provided in `ami_regions`. If you just want to - encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. - If you want a region to be encrypted with that region's default key ID, you can - use an empty string `""` instead of a key id in this map. (e.g. `"us-east-1": ""`) - However, you cannot use default key IDs if you are using this in conjunction with - `snapshot_users` -- in that situation you must use custom keys. +- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, + along with the custom kms key id to use for encryption for that region. + Keys must match the regions provided in `ami_regions`. If you just want to + encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. + If you want a region to be encrypted with that region's default key ID, you can + use an empty string `""` instead of a key id in this map. (e.g. `"us-east-1": ""`) + However, you cannot use default key IDs if you are using this in conjunction with + `snapshot_users` -- in that situation you must use custom keys. -- `run_tags` (object of key/value strings) - Tags to apply to the instance - that is *launched* to create the AMI. These tags are *not* applied to the - resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. +- `run_tags` (object of key/value strings) - Tags to apply to the instance + that is *launched* to create the AMI. 
These tags are *not* applied to the + resulting AMI unless they're duplicated in `tags`. This is a + [template engine](/docs/templates/engine.html) + where the `SourceAMI` variable is replaced with the source AMI ID and + `BuildRegion` variable is replaced with the value of `region`. -- `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes - that are *launched* to create the AMI. These tags are *not* applied to the - resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. +- `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes + that are *launched* to create the AMI. These tags are *not* applied to the + resulting AMI unless they're duplicated in `tags`. This is a + [template engine](/docs/templates/engine.html) + where the `SourceAMI` variable is replaced with the source AMI ID and + `BuildRegion` variable is replaced with the value of `region`. -- `security_group_id` (string) - The ID (*not* the name) of the security group - to assign to the instance. By default this is not set and Packer will - automatically create a new temporary security group to allow SSH access. - Note that if this is specified, you must be sure the security group allows - access to the `ssh_port` given below. +- `security_group_id` (string) - The ID (*not* the name) of the security group + to assign to the instance. By default this is not set and Packer will + automatically create a new temporary security group to allow SSH access. + Note that if this is specified, you must be sure the security group allows + access to the `ssh_port` given below. -- `security_group_ids` (array of strings) - A list of security groups as - described above. Note that if this is specified, you must omit the - `security_group_id`. 
+- `security_group_ids` (array of strings) - A list of security groups as + described above. Note that if this is specified, you must omit the + `security_group_id`. -- `shutdown_behavior` (string) - Automatically terminate instances on shutdown - in case Packer exits ungracefully. Possible values are "stop" and "terminate", - default is `stop`. +- `shutdown_behavior` (string) - Automatically terminate instances on shutdown + in case Packer exits ungracefully. Possible values are "stop" and "terminate", + default is `stop`. -- `skip_region_validation` (boolean) - Set to true if you want to skip - validation of the region configuration option. Default `false`. +- `skip_region_validation` (boolean) - Set to true if you want to skip + validation of the region configuration option. Default `false`. -- `snapshot_groups` (array of strings) - A list of groups that have access to - create volumes from the snapshot(s). By default no groups have permission to create - volumes form the snapshot(s). `all` will make the snapshot publicly accessible. +- `snapshot_groups` (array of strings) - A list of groups that have access to + create volumes from the snapshot(s). By default no groups have permission to create + volumes from the snapshot(s). `all` will make the snapshot publicly accessible. -- `snapshot_users` (array of strings) - A list of account IDs that have access to - create volumes from the snapshot(s). By default no additional users other than the - user creating the AMI has permissions to create volumes from the backing snapshot(s). +- `snapshot_users` (array of strings) - A list of account IDs that have access to + create volumes from the snapshot(s). By default no additional users other than the + user creating the AMI has permissions to create volumes from the backing snapshot(s). -- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. - They will override AMI tags if already applied to snapshot.
This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. +- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. + They will override AMI tags if already applied to snapshot. This is a + [template engine](/docs/templates/engine.html) + where the `SourceAMI` variable is replaced with the source AMI ID and + `BuildRegion` variable is replaced with the value of `region`. -- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. +- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. Example: - ```json + ``` json { "source_ami_filter": { "filters": { @@ -278,89 +277,89 @@ builder. NOTE: This will fail unless *exactly* one AMI is returned. In the above example, `most_recent` will cause this to succeed by selecting the newest image. - - `filters` (map of strings) - filters used to select a `source_ami`. - NOTE: This will fail unless *exactly* one AMI is returned. - Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) - is valid. + - `filters` (map of strings) - filters used to select a `source_ami`. + NOTE: This will fail unless *exactly* one AMI is returned. + Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) + is valid. - - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. - This is helpful to limit the AMIs to a trusted third party, or to your own account. + - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. + This is helpful to limit the AMIs to a trusted third party, or to your own account. - - `most_recent` (bool) - Selects the newest created image when true. - This is most useful for selecting a daily distro build. 
+ - `most_recent` (bool) - Selects the newest created image when true. + This is most useful for selecting a daily distro build. -- `spot_price` (string) - The maximum hourly price to pay for a spot instance - to create the AMI. Spot instances are a type of instance that EC2 starts - when the current spot price is less than the maximum price you specify. Spot - price will be updated based on available spot instance capacity and current - spot instance requests. It may save you some costs. You can set this to - `auto` for Packer to automatically discover the best spot price or to "0" - to use an on demand instance (default). +- `spot_price` (string) - The maximum hourly price to pay for a spot instance + to create the AMI. Spot instances are a type of instance that EC2 starts + when the current spot price is less than the maximum price you specify. Spot + price will be updated based on available spot instance capacity and current + spot instance requests. It may save you some costs. You can set this to + `auto` for Packer to automatically discover the best spot price or to "0" + to use an on demand instance (default). -- `spot_price_auto_product` (string) - Required if `spot_price` is set - to `auto`. This tells Packer what sort of AMI you're launching to find the - best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, - `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)` +- `spot_price_auto_product` (string) - Required if `spot_price` is set + to `auto`. This tells Packer what sort of AMI you're launching to find the + best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, + `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)` -- `ssh_keypair_name` (string) - If specified, this is the key that will be - used for SSH with the machine. The key must match a key pair name loaded - up into Amazon EC2. 
By default, this is blank, and Packer will - generate a temporary keypair unless - [`ssh_password`](/docs/templates/communicator.html#ssh_password) is used. - [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file) - or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized. +- `ssh_keypair_name` (string) - If specified, this is the key that will be + used for SSH with the machine. The key must match a key pair name loaded + up into Amazon EC2. By default, this is blank, and Packer will + generate a temporary keypair unless + [`ssh_password`](/docs/templates/communicator.html#ssh_password) is used. + [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file) + or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized. -- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to - authenticate connections to the source instance. No temporary keypair will - be created, and the values of `ssh_password` and `ssh_private_key_file` will - be ignored. To use this option with a key pair already configured in the source - AMI, leave the `ssh_keypair_name` blank. To associate an existing key pair - in AWS with the source instance, set the `ssh_keypair_name` field to the name - of the key pair. +- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to + authenticate connections to the source instance. No temporary keypair will + be created, and the values of `ssh_password` and `ssh_private_key_file` will + be ignored. To use this option with a key pair already configured in the source + AMI, leave the `ssh_keypair_name` blank. To associate an existing key pair + in AWS with the source instance, set the `ssh_keypair_name` field to the name + of the key pair. -- `ssh_private_ip` (boolean) - If true, then SSH will always use the private - IP if available. Also works for WinRM. 
+- `ssh_private_ip` (boolean) - If true, then SSH will always use the private + IP if available. Also works for WinRM. -- `subnet_id` (string) - If using VPC, the ID of the subnet, such as - `subnet-12345def`, where Packer will launch the EC2 instance. This field is - required if you are using an non-default VPC. +- `subnet_id` (string) - If using VPC, the ID of the subnet, such as + `subnet-12345def`, where Packer will launch the EC2 instance. This field is + required if you are using a non-default VPC. -- `tags` (object of key/value strings) - Tags applied to the AMI and - relevant snapshots. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. +- `tags` (object of key/value strings) - Tags applied to the AMI and + relevant snapshots. This is a + [template engine](/docs/templates/engine.html) + where the `SourceAMI` variable is replaced with the source AMI ID and + `BuildRegion` variable is replaced with the value of `region`. -- `temporary_key_pair_name` (string) - The name of the temporary key pair - to generate. By default, Packer generates a name that looks like - `packer_`, where \ is a 36 character unique identifier. +- `temporary_key_pair_name` (string) - The name of the temporary key pair + to generate. By default, Packer generates a name that looks like + `packer_<UUID>`, where <UUID> is a 36 character unique identifier. -- `token` (string) - The access token to use. This is different from the - access key and secret key. If you're not sure what this is, then you - probably don't need it. This will also be read from the `AWS_SESSION_TOKEN` - environmental variable. +- `token` (string) - The access token to use. This is different from the + access key and secret key. If you're not sure what this is, then you + probably don't need it. This will also be read from the `AWS_SESSION_TOKEN` + environmental variable.
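The tag-style options described above (`tags`, `run_tags`, `run_volume_tags`, `snapshot_tags`) all pass their values through the template engine, so the `SourceAMI` and `BuildRegion` variables can be interpolated into tag values. A minimal illustrative sketch — the tag names here are made up, not part of these docs:

``` json
{
  "tags": {
    "OS_Version": "Ubuntu",
    "Base_AMI_ID": "{{ .SourceAMI }}",
    "Build_Region": "{{ .BuildRegion }}"
  }
}
```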
-- `user_data` (string) - User data to apply when launching the instance. Note - that you need to be careful about escaping characters due to the templates - being JSON. It is often more convenient to use `user_data_file`, instead. +- `user_data` (string) - User data to apply when launching the instance. Note + that you need to be careful about escaping characters due to the templates + being JSON. It is often more convenient to use `user_data_file`, instead. -- `user_data_file` (string) - Path to a file that will be used for the user - data when launching the instance. +- `user_data_file` (string) - Path to a file that will be used for the user + data when launching the instance. -- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID - in order to create a temporary security group within the VPC. Requires `subnet_id` - to be set. If this field is left blank, Packer will try to get the VPC ID from the - `subnet_id`. +- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID + in order to create a temporary security group within the VPC. Requires `subnet_id` + to be set. If this field is left blank, Packer will try to get the VPC ID from the + `subnet_id`. -- `windows_password_timeout` (string) - The timeout for waiting for a Windows - password for Windows instances. Defaults to 20 minutes. Example value: `10m` +- `windows_password_timeout` (string) - The timeout for waiting for a Windows + password for Windows instances. Defaults to 20 minutes. Example value: `10m` ## Basic Example Here is a basic example. 
You will need to provide access keys, and may need to change the AMI IDs according to what images exist at the time the template is run: -```json +``` json { "type": "amazon-ebs", "access_key": "YOUR KEY HERE", @@ -373,7 +372,7 @@ change the AMI IDs according to what images exist at the time the template is ru } ``` --> **Note:** Packer can also read the access key and secret access key from +-> **Note:** Packer can also read the access key and secret access key from environmental variables. See the configuration reference in the section above for more information on what environmental variables Packer will look for. @@ -397,7 +396,7 @@ configuration of `launch_block_device_mappings` will expand the root volume `ami_block_device_mappings` AWS will attach additional volumes `/dev/sdb` and `/dev/sdc` when we boot a new instance of our AMI. -```json +``` json { "type": "amazon-ebs", "access_key": "YOUR KEY HERE", @@ -435,7 +434,7 @@ Here is an example using the optional AMI tags. This will add the tags provide your access keys, and may need to change the source AMI ID based on what images exist when this template is run: -```json +``` json { "type": "amazon-ebs", "access_key": "YOUR KEY HERE", @@ -452,7 +451,7 @@ images exist when this template is run: } ``` --> **Note:** Packer uses pre-built AMIs as the source for building images. +-> **Note:** Packer uses pre-built AMIs as the source for building images. These source AMIs may include volumes that are not flagged to be destroyed on termination of the instance building the new image. 
Packer will attempt to clean up all residual volumes that are not designated by the user to remain after diff --git a/website/source/docs/builders/amazon-ebssurrogate.html.md b/website/source/docs/builders/amazon-ebssurrogate.html.md index ec403d1ea..abe3cac8a 100644 --- a/website/source/docs/builders/amazon-ebssurrogate.html.md +++ b/website/source/docs/builders/amazon-ebssurrogate.html.md @@ -1,10 +1,10 @@ --- +description: | + The amazon-ebssurrogate Packer builder is like the chroot builder, but does + not require running inside an EC2 instance. layout: docs -sidebar_current: docs-builders-amazon-ebssurrogate -page_title: Amazon EBS Surrogate - Builders -description: |- - The amazon-ebssurrogate Packer builder is like the chroot builder, but does - not require running inside an EC2 instance. +page_title: 'Amazon EBS Surrogate - Builders' +sidebar_current: 'docs-builders-amazon-ebssurrogate' --- # EBS Surrogate Builder @@ -35,33 +35,33 @@ builder. ### Required: -- `access_key` (string) - The access key used to communicate with AWS. [Learn +- `access_key` (string) - The access key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `instance_type` (string) - The EC2 instance type to use while building the +- `instance_type` (string) - The EC2 instance type to use while building the AMI, such as `m1.small`. -- `region` (string) - The name of the region, such as `us-east-1`, in which to +- `region` (string) - The name of the region, such as `us-east-1`, in which to launch the EC2 instance to create the AMI. -- `secret_key` (string) - The secret key used to communicate with AWS. [Learn +- `secret_key` (string) - The secret key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `source_ami` (string) - The initial AMI used as a base for the newly +- `source_ami` (string) - The initial AMI used as a base for the newly created machine. 
`source_ami_filter` may be used instead to populate this automatically. -- `ami_root_device` (block device mapping) - A block device mapping describing +- `ami_root_device` (block device mapping) - A block device mapping describing the root device of the AMI. This looks like the mappings in `ami_block_device_mapping`, except with an additional field: -- `source_device_name` (string) - The device name of the block device on the +- `source_device_name` (string) - The device name of the block device on the source instance to be used as the root device for the AMI. This must correspond to a block device in `launch_block_device_mapping`. ### Optional: -- `ami_block_device_mappings` (array of block device mappings) - Add one or +- `ami_block_device_mappings` (array of block device mappings) - Add one or more [block device mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) to the AMI. These will be attached when booting a new instance from your AMI. To add a block device during the packer build see @@ -69,134 +69,133 @@ builder. on the type of VM you use. The block device mappings allow for the following configuration: - - `delete_on_termination` (boolean) - Indicates whether the EBS volume is + - `delete_on_termination` (boolean) - Indicates whether the EBS volume is deleted on instance termination. Default `false`. **NOTE**: If this value is not explicitly set to `true` and volumes are not cleaned up by an alternative method, additional volumes will accumulate after every build. - - `device_name` (string) - The device name exposed to the instance (for - example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. + - `device_name` (string) - The device name exposed to the instance (for + example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. 
- - `encrypted` (boolean) - Indicates whether to encrypt the volume or not + - `encrypted` (boolean) - Indicates whether to encrypt the volume or not - - `iops` (integer) - The number of I/O operations per second (IOPS) that the + - `iops` (integer) - The number of I/O operations per second (IOPS) that the volume supports. See the documentation on [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) for more information - - `no_device` (boolean) - Suppresses the specified device included in the + - `no_device` (boolean) - Suppresses the specified device included in the block device mapping of the AMI - - `snapshot_id` (string) - The ID of the snapshot + - `snapshot_id` (string) - The ID of the snapshot - - `virtual_name` (string) - The virtual device name. See the documentation on + - `virtual_name` (string) - The virtual device name. See the documentation on [Block Device Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) for more information - - `volume_size` (integer) - The size of the volume, in GiB. Required if not + - `volume_size` (integer) - The size of the volume, in GiB. Required if not specifying a `snapshot_id` - - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) + - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic volumes -- `ami_description` (string) - The description to set for the +- `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. 
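The block-device fields listed above combine into mapping objects. A minimal sketch of one launch-time mapping — the device name and size are illustrative, not prescribed by these docs:

``` json
{
  "launch_block_device_mappings": [
    {
      "device_name": "/dev/xvdf",
      "volume_type": "gp2",
      "volume_size": 40,
      "delete_on_termination": true
    }
  ]
}
```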
-- `ami_groups` (array of strings) - A list of groups that have access to +- `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't accept any value other than `all`. -- `ami_product_codes` (array of strings) - A list of product codes to +- `ami_product_codes` (array of strings) - A list of product codes to associate with the AMI. By default no product codes are associated with the AMI. -- `ami_regions` (array of strings) - A list of regions to copy the AMI to. +- `ami_regions` (array of strings) - A list of regions to copy the AMI to. Tags and attributes are copied along with the AMI. AMI copying takes time depending on the size of the AMI, but will generally take many minutes. -- `ami_users` (array of strings) - A list of account IDs that have access to +- `ami_users` (array of strings) - A list of account IDs that have access to launch the resulting AMI(s). By default no additional users other than the user creating the AMI has permissions to launch it. -- `ami_virtualization_type` (string) - The type of virtualization for the AMI +- `ami_virtualization_type` (string) - The type of virtualization for the AMI you are building. This option must match the supported virtualization type of `source_ami`. Can be `paravirtual` or `hvm`. -- `associate_public_ip_address` (boolean) - If using a non-default VPC, public +- `associate_public_ip_address` (boolean) - If using a non-default VPC, public IP addresses are not provided by default. If this is toggled, your new instance will get a Public IP. -- `availability_zone` (string) - Destination availability zone to launch +- `availability_zone` (string) - Destination availability zone to launch instance in. Leave this empty to allow Amazon to auto-assign. 
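For illustration, the sharing and copying options above might be combined as follows — the account ID and regions are placeholders:

``` json
{
  "ami_regions": ["us-west-1", "us-west-2"],
  "ami_users": ["123456789012"],
  "ami_virtualization_type": "hvm"
}
```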
-- `custom_endpoint_ec2` (string) - this option is useful if you use +- `custom_endpoint_ec2` (string) - This option is useful if you use + another cloud provider that provides a compatible API with AWS EC2, - specify another endpoint like this "https://ec2.another.endpoint..com" + specify another endpoint like this "https://ec2.another.endpoint..com" -- `disable_stop_instance` (boolean) - Packer normally stops the build instance +- `disable_stop_instance` (boolean) - Packer normally stops the build instance after all provisioners have run. For Windows instances, it is sometimes desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html) which will stop the instance for you. If this is set to true, Packer *will not* stop the instance and will wait for you to stop it manually. You can do this with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). - ```json + ``` json { "type": "windows-shell", "inline": ["\"c:\\Program Files\\Amazon\\Ec2ConfigService\\ec2config.exe\" -sysprep"] } ``` -- `ebs_optimized` (boolean) - Mark instance as [EBS +- `ebs_optimized` (boolean) - Mark instance as [EBS Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). Default `false`. -- `enhanced_networking` (boolean) - Enable enhanced - networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add - `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make - sure enhanced networking is enabled on your instance. See [Amazon's - documentation on enabling enhanced networking]( - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) +- `enhanced_networking` (boolean) - Enable enhanced + networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add + `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make + sure enhanced networking is enabled on your instance.
See [Amazon's + documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) -- `force_deregister` (boolean) - Force Packer to first deregister an existing +- `force_deregister` (boolean) - Force Packer to first deregister an existing AMI if one with the same name already exists. Default `false`. -- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with +- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with AMIs, which have been deregistered by `force_deregister`. Default `false`. -- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy of the +- `encrypt_boot` (boolean) - Instruct packer to automatically create a copy of the AMI with an encrypted boot volume (discarding the initial unencrypted AMI in the process). Default `false`. -- `kms_key_id` (string) - The ID of the KMS key to use for boot volume encryption. +- `kms_key_id` (string) - The ID of the KMS key to use for boot volume encryption. This only applies to the main `region`, other regions where the AMI will be copied will be encrypted by the default EBS KMS key. -- `iam_instance_profile` (string) - The name of an [IAM instance +- `iam_instance_profile` (string) - The name of an [IAM instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. -- `launch_block_device_mappings` (array of block device mappings) - Add one or +- `launch_block_device_mappings` (array of block device mappings) - Add one or more block devices before the packer build starts. These are not necessarily preserved when booting from the AMI built with packer. See `ami_block_device_mappings`, above, for details. -- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - code. This should probably be a user variable since it changes all the time. 
+- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + code. This should probably be a user variable since it changes all the time. -- `profile` (string) - The profile to use in the shared credentials file for +- `profile` (string) - The profile to use in the shared credentials file for AWS. See Amazon's documentation on [specifying profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) for more details. -- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, +- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, along with the custom kms key id to use for encryption for that region. Keys must match the regions provided in `ami_regions`. If you just want to encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. @@ -205,55 +204,55 @@ builder. However, you cannot use default key IDs if you are using this in conjunction with `snapshot_users` -- in that situation you must use custom keys. -- `run_tags` (object of key/value strings) - Tags to apply to the instance +- `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes +- `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes that are *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. 
This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `security_group_id` (string) - The ID (*not* the name) of the security group +- `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will automatically create a new temporary security group to allow SSH access. Note that if this is specified, you must be sure the security group allows access to the `ssh_port` given below. -- `security_group_ids` (array of strings) - A list of security groups as +- `security_group_ids` (array of strings) - A list of security groups as described above. Note that if this is specified, you must omit the `security_group_id`. -- `shutdown_behavior` (string) - Automatically terminate instances on shutdown +- `shutdown_behavior` (string) - Automatically terminate instances on shutdown in case Packer exits ungracefully. Possible values are "stop" and "terminate", default is `stop`. -- `skip_region_validation` (boolean) - Set to true if you want to skip - validation of the region configuration option. Default `false`. +- `skip_region_validation` (boolean) - Set to true if you want to skip + validation of the region configuration option. Default `false`. -- `snapshot_groups` (array of strings) - A list of groups that have access to +- `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create volumes from the snapshot(s). `all` will make the snapshot publicly accessible. -- `snapshot_users` (array of strings) - A list of account IDs that have access to +- `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s).
By default no additional users other than the user creating the AMI have permissions to create volumes from the backing snapshot(s). -- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. - They will override AMI tags if already applied to snapshot. This is a +- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. + They will override AMI tags if already applied to snapshot. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. +- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. Example: - ```json + ``` json { "source_ami_filter": { "filters": { @@ -271,18 +270,18 @@ builder. NOTE: This will fail unless *exactly* one AMI is returned. In the above example, `most_recent` will cause this to succeed by selecting the newest image. - - `filters` (map of strings) - filters used to select a `source_ami`. - NOTE: This will fail unless *exactly* one AMI is returned. - Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) - is valid. + - `filters` (map of strings) - filters used to select a `source_ami`. + NOTE: This will fail unless *exactly* one AMI is returned. + Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) + is valid. - - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. - This is helpful to limit the AMIs to a trusted third party, or to your own account. + - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. + This is helpful to limit the AMIs to a trusted third party, or to your own account.
- - `most_recent` (bool) - Selects the newest created image when true. - This is most useful for selecting a daily distro build. + - `most_recent` (bool) - Selects the newest created image when true. + This is most useful for selecting a daily distro build. -- `spot_price` (string) - The maximum hourly price to pay for a spot instance +- `spot_price` (string) - The maximum hourly price to pay for a spot instance to create the AMI. Spot instances are a type of instance that EC2 starts when the current spot price is less than the maximum price you specify. Spot price will be updated based on available spot instance capacity and current @@ -290,20 +289,20 @@ builder. `auto` for Packer to automatically discover the best spot price or to "0" to use an on demand instance (default). -- `spot_price_auto_product` (string) - Required if `spot_price` is set +- `spot_price_auto_product` (string) - Required if `spot_price` is set to `auto`. This tells Packer what sort of AMI you're launching to find the best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)` -- `ssh_keypair_name` (string) - If specified, this is the key that will be +- `ssh_keypair_name` (string) - If specified, this is the key that will be used for SSH with the machine. The key must match a key pair name loaded - up into Amazon EC2. By default, this is blank, and Packer will + up into Amazon EC2. By default, this is blank, and Packer will generate a temporary keypair unless [`ssh_password`](/docs/templates/communicator.html#ssh_password) is used. [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file) or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized. -- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to +- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to authenticate connections to the source instance. 
No temporary keypair will be created, and the values of `ssh_password` and `ssh_private_key_file` will be ignored. To use this option with a key pair already configured in the source @@ -311,45 +310,45 @@ builder. in AWS with the source instance, set the `ssh_keypair_name` field to the name of the key pair. -- `ssh_private_ip` (boolean) - If true, then SSH will always use the private +- `ssh_private_ip` (boolean) - If true, then SSH will always use the private IP if available. -- `subnet_id` (string) - If using VPC, the ID of the subnet, such as +- `subnet_id` (string) - If using VPC, the ID of the subnet, such as `subnet-12345def`, where Packer will launch the EC2 instance. This field is required if you are using a non-default VPC. -- `tags` (object of key/value strings) - Tags applied to the AMI and +- `tags` (object of key/value strings) - Tags applied to the AMI and relevant snapshots. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `temporary_key_pair_name` (string) - The name of the temporary keypair +- `temporary_key_pair_name` (string) - The name of the temporary keypair to generate. By default, Packer generates a name with a UUID. -- `token` (string) - The access token to use. This is different from the +- `token` (string) - The access token to use. This is different from the access key and secret key. If you're not sure what this is, then you probably don't need it. This will also be read from the `AWS_SESSION_TOKEN` environmental variable. -- `user_data` (string) - User data to apply when launching the instance. Note +- `user_data` (string) - User data to apply when launching the instance. Note that you need to be careful about escaping characters due to the templates being JSON. It is often more convenient to use `user_data_file`, instead.
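Because templates are JSON, an inline `user_data` script has to escape quotes and newlines, which is why `user_data_file` is usually more convenient. A minimal sketch of the inline form — the script content is hypothetical:

``` json
{
  "user_data": "#!/bin/bash\necho 'provisioning complete'"
}
```

Equivalently, point `user_data_file` at a script on disk and avoid the escaping entirely.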
-- `user_data_file` (string) - Path to a file that will be used for the user +- `user_data_file` (string) - Path to a file that will be used for the user data when launching the instance. -- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID - in order to create a temporary security group within the VPC. Requires `subnet_id` - to be set. If this field is left blank, Packer will try to get the VPC ID from the - `subnet_id`. +- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID + in order to create a temporary security group within the VPC. Requires `subnet_id` + to be set. If this field is left blank, Packer will try to get the VPC ID from the + `subnet_id`. -- `windows_password_timeout` (string) - The timeout for waiting for a Windows +- `windows_password_timeout` (string) - The timeout for waiting for a Windows password for Windows instances. Defaults to 20 minutes. Example value: `10m` ## Basic Example -```json +``` json { "type" : "amazon-ebssurrogate", "secret_key" : "YOUR SECRET KEY HERE", @@ -376,7 +375,7 @@ builder. } ``` --> **Note:** Packer can also read the access key and secret access key from +-> **Note:** Packer can also read the access key and secret access key from environmental variables. See the configuration reference in the section above for more information on what environmental variables Packer will look for. @@ -392,7 +391,7 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private key in the current directory and will output the DNS or IP information as well. You can use this information to access the instance as it is running. --> **Note:** Packer uses pre-built AMIs as the source for building images. +-> **Note:** Packer uses pre-built AMIs as the source for building images. These source AMIs may include volumes that are not flagged to be destroyed on termination of the instance building the new image. 
In addition to those volumes created by this builder, any volumes in the source AMI which are not marked for diff --git a/website/source/docs/builders/amazon-ebsvolume.html.md b/website/source/docs/builders/amazon-ebsvolume.html.md index d262e04f8..c27c54148 100644 --- a/website/source/docs/builders/amazon-ebsvolume.html.md +++ b/website/source/docs/builders/amazon-ebsvolume.html.md @@ -1,10 +1,10 @@ --- +description: | + The amazon-ebsvolume Packer builder is like the EBS builder, but is intended + to create EBS volumes rather than a machine image. layout: docs -sidebar_current: docs-builders-amazon-ebsvolume -page_title: Amazon EBS Volume - Builders -description: |- - The amazon-ebsvolume Packer builder is like the EBS builder, but is intended - to create EBS volumes rather than a machine image. +page_title: 'Amazon EBS Volume - Builders' +sidebar_current: 'docs-builders-amazon-ebsvolume' --- # EBS Volume Builder @@ -25,7 +25,7 @@ instance while the image is being created. The builder does *not* manage EBS Volumes. Once it creates volumes and stores it in your account, it is up to you to use, delete, etc. the volumes. --> **Note:** Temporary resources are, by default, all created with the prefix +-> **Note:** Temporary resources are, by default, all created with the prefix `packer`. This can be useful if you want to restrict the security groups and key pairs Packer is able to operate on. @@ -41,89 +41,88 @@ builder. ### Required: -- `access_key` (string) - The access key used to communicate with AWS. [Learn +- `access_key` (string) - The access key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `instance_type` (string) - The EC2 instance type to use while building the +- `instance_type` (string) - The EC2 instance type to use while building the AMI, such as `m1.small`.
-- `region` (string) - The name of the region, such as `us-east-1`, in which to +- `region` (string) - The name of the region, such as `us-east-1`, in which to launch the EC2 instance to create the AMI. -- `secret_key` (string) - The secret key used to communicate with AWS. [Learn +- `secret_key` (string) - The secret key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `source_ami` (string) - The initial AMI used as a base for the newly +- `source_ami` (string) - The initial AMI used as a base for the newly created machine. `source_ami_filter` may be used instead to populate this automatically. ### Optional: -- `ebs_volumes` (array of block device mappings) - Add the block +- `ebs_volumes` (array of block device mappings) - Add the block device mappings to the AMI. The block device mappings allow for keys: - - `device_name` (string) - The device name exposed to the instance (for - example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. - - `delete_on_termination` (boolean) - Indicates whether the EBS volume is + - `device_name` (string) - The device name exposed to the instance (for + example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. + - `delete_on_termination` (boolean) - Indicates whether the EBS volume is deleted on instance termination - - `encrypted` (boolean) - Indicates whether to encrypt the volume or not - - `iops` (integer) - The number of I/O operations per second (IOPS) that the + - `encrypted` (boolean) - Indicates whether to encrypt the volume or not + - `iops` (integer) - The number of I/O operations per second (IOPS) that the volume supports. 
See the documentation on [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) for more information - - `no_device` (boolean) - Suppresses the specified device included in the + - `no_device` (boolean) - Suppresses the specified device included in the block device mapping of the AMI - - `snapshot_id` (string) - The ID of the snapshot - - `virtual_name` (string) - The virtual device name. See the documentation on + - `snapshot_id` (string) - The ID of the snapshot - `virtual_name` (string) - The virtual device name. See the documentation on [Block Device Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) for more information - - `volume_size` (integer) - The size of the volume, in GiB. Required if not + - `volume_size` (integer) - The size of the volume, in GiB. Required if not specifying a `snapshot_id` - - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) + - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic volumes - - `tags` (map) - Tags to apply to the volume. These are retained after the - builder completes. This is a [template engine] + - `tags` (map) - Tags to apply to the volume. These are retained after the + builder completes. This is a [template engine] (/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `associate_public_ip_address` (boolean) - If using a non-default VPC, public +- `associate_public_ip_address` (boolean) - If using a non-default VPC, public IP addresses are not provided by default. If this is toggled, your new instance will get a Public IP. -- `availability_zone` (string) - Destination availability zone to launch +- `availability_zone` (string) - Destination availability zone to launch instance in.
Leave this empty to allow Amazon to auto-assign. -- `custom_endpoint_ec2` (string) - this option is useful if you use +- `custom_endpoint_ec2` (string) - this option is useful if you use another cloud provider that provide a compatible API with aws EC2, - specify another endpoint like this "https://ec2.another.endpoint..com" + specify another endpoint like this "https://ec2.another.endpoint..com" -- `ebs_optimized` (boolean) - Mark instance as [EBS +- `ebs_optimized` (boolean) - Mark instance as [EBS Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). Default `false`. -- `enhanced_networking` (boolean) - Enable enhanced - networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add - `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make - sure enhanced networking is enabled on your instance. See [Amazon's - documentation on enabling enhanced networking]( - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) +- `enhanced_networking` (boolean) - Enable enhanced + networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add + `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make + sure enhanced networking is enabled on your instance. See [Amazon's + documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) -- `iam_instance_profile` (string) - The name of an [IAM instance +- `iam_instance_profile` (string) - The name of an [IAM instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. -- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - code. This should probably be a user variable since it changes all the time. +- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + code.
This should probably be a user variable since it changes all the time. -- `profile` (string) - The profile to use in the shared credentials file for +- `profile` (string) - The profile to use in the shared credentials file for AWS. See Amazon's documentation on [specifying profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) for more details. -- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, +- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, along with the custom kms key id to use for encryption for that region. Keys must match the regions provided in `ami_regions`. If you just want to encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. @@ -132,42 +131,42 @@ builder. However, you cannot use default key IDs if you are using this in conjunction with `snapshot_users` -- in that situation you must use custom keys. -- `run_tags` (object of key/value strings) - Tags to apply to the instance +- `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `security_group_id` (string) - The ID (*not* the name) of the security group +- `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will automatically create a new temporary security group to allow SSH access. Note that if this is specified, you must be sure the security group allows access to the `ssh_port` given below. 
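Since the TOTP in `mfa_code` changes constantly, the reference above suggests passing it as a user variable. A minimal sketch, assuming the standard Packer `variables`/`{{user}}` template syntax; the builder is trimmed to the relevant field for brevity:

``` json
{
  "variables": {
    "mfa_code": ""
  },
  "builders": [
    {
      "type": "amazon-ebsvolume",
      "mfa_code": "{{user `mfa_code`}}"
    }
  ]
}
```

The current code can then be supplied per run, for example `packer build -var 'mfa_code=123456' template.json`.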
-- `security_group_ids` (array of strings) - A list of security groups as +- `security_group_ids` (array of strings) - A list of security groups as described above. Note that if this is specified, you must omit the `security_group_id`. -- `shutdown_behavior` (string) - Automatically terminate instances on shutdown +- `shutdown_behavior` (string) - Automatically terminate instances on shutdown in case Packer exits ungracefully. Possible values are `stop` and `terminate`. Defaults to `stop`. -- `skip_region_validation` (boolean) - Set to `true` if you want to skip - validation of the region configuration option. Defaults to `false`. +- `skip_region_validation` (boolean) - Set to `true` if you want to skip + validation of the region configuration option. Defaults to `false`. -- `snapshot_groups` (array of strings) - A list of groups that have access to +- `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create volumes from the snapshot(s). `all` will make the snapshot publicly accessible. -- `snapshot_users` (array of strings) - A list of account IDs that have access to +- `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s). By default no additional users other than the user creating the AMI has permissions to create volumes from the backing snapshot(s). -- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. +- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. Example: - ```json + ``` json { "source_ami_filter": { "filters": { @@ -185,18 +184,18 @@ builder. NOTE: This will fail unless *exactly* one AMI is returned. In the above example, `most_recent` will cause this to succeed by selecting the newest image. - - `filters` (map of strings) - filters used to select a `source_ami`. - NOTE: This will fail unless *exactly* one AMI is returned.
- Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) - is valid. + - `filters` (map of strings) - filters used to select a `source_ami`. + NOTE: This will fail unless *exactly* one AMI is returned. + Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) + is valid. - - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. - This is helpful to limit the AMIs to a trusted third party, or to your own account. + - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs. + This is helpful to limit the AMIs to a trusted third party, or to your own account. - - `most_recent` (bool) - Selects the newest created image when true. - This is most useful for selecting a daily distro build. + - `most_recent` (bool) - Selects the newest created image when true. + This is most useful for selecting a daily distro build. -- `spot_price` (string) - The maximum hourly price to pay for a spot instance +- `spot_price` (string) - The maximum hourly price to pay for a spot instance to create the AMI. Spot instances are a type of instance that EC2 starts when the current spot price is less than the maximum price you specify. Spot price will be updated based on available spot instance capacity and current @@ -204,53 +203,52 @@ builder. `auto` for Packer to automatically discover the best spot price or to `0` to use an on-demand instance (default). -- `spot_price_auto_product` (string) - Required if `spot_price` is set +- `spot_price_auto_product` (string) - Required if `spot_price` is set to `auto`. This tells Packer what sort of AMI you're launching to find the best spot price. 
This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)` or `Windows (Amazon VPC)` -- `ssh_keypair_name` (string) - If specified, this is the key that will be +- `ssh_keypair_name` (string) - If specified, this is the key that will be used for SSH with the machine. By default, this is blank, and Packer will generate a temporary key pair unless [`ssh_password`](/docs/templates/communicator.html#ssh_password) is used. [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file) must be specified with this. -- `ssh_private_ip` (boolean) - If `true`, then SSH will always use the private +- `ssh_private_ip` (boolean) - If `true`, then SSH will always use the private IP if available. Also works for WinRM. -- `subnet_id` (string) - If using VPC, the ID of the subnet, such as +- `subnet_id` (string) - If using VPC, the ID of the subnet, such as `subnet-12345def`, where Packer will launch the EC2 instance. This field is required if you are using a non-default VPC. -- `temporary_key_pair_name` (string) - The name of the temporary key pair +- `temporary_key_pair_name` (string) - The name of the temporary key pair to generate. By default, Packer generates a name that looks like - `packer_`, where \ is a 36 character unique identifier. + `packer_<UUID>`, where `<UUID>` is a 36 character unique identifier. -- `token` (string) - The access token to use. This is different from the +- `token` (string) - The access token to use. This is different from the access key and secret key. If you're not sure what this is, then you probably don't need it. This will also be read from the `AWS_SESSION_TOKEN` environmental variable. -- `user_data` (string) - User data to apply when launching the instance. Note +- `user_data` (string) - User data to apply when launching the instance. Note that you need to be careful about escaping characters due to the templates being JSON.
It is often more convenient to use `user_data_file`, instead. -- `user_data_file` (string) - Path to a file that will be used for the user +- `user_data_file` (string) - Path to a file that will be used for the user data when launching the instance. -- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID - in order to create a temporary security group within the VPC. Requires `subnet_id` - to be set. If this field is left blank, Packer will try to get the VPC ID from the - `subnet_id`. +- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID + in order to create a temporary security group within the VPC. Requires `subnet_id` + to be set. If this field is left blank, Packer will try to get the VPC ID from the + `subnet_id`. -- `windows_password_timeout` (string) - The timeout for waiting for a Windows +- `windows_password_timeout` (string) - The timeout for waiting for a Windows password for Windows instances. Defaults to 20 minutes. Example value: `10m` - ## Basic Example -```json +``` json { "type" : "amazon-ebsvolume", "secret_key" : "YOUR SECRET KEY HERE", @@ -294,7 +292,7 @@ builder. } ``` --> **Note:** Packer can also read the access key and secret access key from +-> **Note:** Packer can also read the access key and secret access key from environmental variables. See the configuration reference in the section above for more information on what environmental variables Packer will look for. @@ -310,7 +308,7 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private key in the current directory and will output the DNS or IP information as well. You can use this information to access the instance as it is running. --> **Note:** Packer uses pre-built AMIs as the source for building images. +-> **Note:** Packer uses pre-built AMIs as the source for building images. These source AMIs may include volumes that are not flagged to be destroyed on termination of the instance building the new image. 
In addition to those volumes created by this builder, any volumes in the source AMI which are not marked for diff --git a/website/source/docs/builders/amazon-instance.html.md b/website/source/docs/builders/amazon-instance.html.md index aeeff7b89..c63e537a5 100644 --- a/website/source/docs/builders/amazon-instance.html.md +++ b/website/source/docs/builders/amazon-instance.html.md @@ -1,12 +1,12 @@ --- +description: | + The amazon-instance Packer builder is able to create Amazon AMIs backed by + instance storage as the root device. For more information on the difference + between instance storage and EBS-backed instances, see the storage for the + root device section in the EC2 documentation. layout: docs -sidebar_current: docs-builders-amazon-instance -page_title: Amazon instance-store - Builders -description: |- - The amazon-instance Packer builder is able to create Amazon AMIs backed by - instance storage as the root device. For more information on the difference - between instance storage and EBS-backed instances, see the storage for the - root device section in the EC2 documentation. +page_title: 'Amazon instance-store - Builders' +sidebar_current: 'docs-builders-amazon-instance' --- # AMI Builder (instance-store) @@ -29,16 +29,16 @@ created. This simplifies configuration quite a bit. The builder does *not* manage AMIs. Once it creates an AMI and stores it in your account, it is up to you to use, delete, etc. the AMI. --> **Note:** Temporary resources are, by default, all created with the prefix +-> **Note:** Temporary resources are, by default, all created with the prefix `packer`. This can be useful if you want to restrict the security groups and key pairs packer is able to operate on. --> **Note:** This builder requires that the [Amazon EC2 AMI +-> **Note:** This builder requires that the [Amazon EC2 AMI Tools](https://aws.amazon.com/developertools/368) are installed onto the machine.
This can be done within a provisioner, but must be done before the builder finishes running. -~> Instance builds are not supported for Windows. Use [`amazon-ebs`](amazon-ebs.html) instead. +~> Instance builds are not supported for Windows. Use [`amazon-ebs`](amazon-ebs.html) instead. ## Configuration Reference @@ -52,45 +52,45 @@ builder. ### Required: -- `access_key` (string) - The access key used to communicate with AWS. [Learn +- `access_key` (string) - The access key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `account_id` (string) - Your AWS account ID. This is required for bundling +- `account_id` (string) - Your AWS account ID. This is required for bundling the AMI. This is *not the same* as the access key. You can find your account ID in the security credentials page of your AWS account. -- `ami_name` (string) - The name of the resulting AMI that will appear when +- `ami_name` (string) - The name of the resulting AMI that will appear when managing AMIs in the AWS console or via APIs. This must be unique. To help make this unique, use a function like `timestamp` (see [configuration templates](/docs/templates/engine.html) for more info) -- `instance_type` (string) - The EC2 instance type to use while building the +- `instance_type` (string) - The EC2 instance type to use while building the AMI, such as `m1.small`. -- `region` (string) - The name of the region, such as `us-east-1`, in which to +- `region` (string) - The name of the region, such as `us-east-1`, in which to launch the EC2 instance to create the AMI. -- `s3_bucket` (string) - The name of the S3 bucket to upload the AMI. This +- `s3_bucket` (string) - The name of the S3 bucket to upload the AMI. This bucket will be created if it doesn't exist. -- `secret_key` (string) - The secret key used to communicate with AWS. [Learn +- `secret_key` (string) - The secret key used to communicate with AWS. 
[Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `source_ami` (string) - The initial AMI used as a base for the newly +- `source_ami` (string) - The initial AMI used as a base for the newly created machine. -- `x509_cert_path` (string) - The local path to a valid X509 certificate for +- `x509_cert_path` (string) - The local path to a valid X509 certificate for your AWS account. This is used for bundling the AMI. This X509 certificate must be registered with your account from the security credentials page in the AWS console. -- `x509_key_path` (string) - The local path to the private key for the X509 +- `x509_key_path` (string) - The local path to the private key for the X509 certificate specified by `x509_cert_path`. This is used for bundling the AMI. ### Optional: -- `ami_block_device_mappings` (array of block device mappings) - Add one or +- `ami_block_device_mappings` (array of block device mappings) - Add one or more [block device mappings](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) to the AMI. These will be attached when booting a new instance from your AMI. To add a block device during the Packer build see @@ -98,128 +98,127 @@ builder. on the type of VM you use. The block device mappings allow for the following configuration: - - `delete_on_termination` (boolean) - Indicates whether the EBS volume is + - `delete_on_termination` (boolean) - Indicates whether the EBS volume is deleted on instance termination. Default `false`. **NOTE**: If this value is not explicitly set to `true` and volumes are not cleaned up by an alternative method, additional volumes will accumulate after every build. - - `device_name` (string) - The device name exposed to the instance (for - example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. + - `device_name` (string) - The device name exposed to the instance (for + example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. 
- - `encrypted` (boolean) - Indicates whether to encrypt the volume or not + - `encrypted` (boolean) - Indicates whether to encrypt the volume or not - - `iops` (integer) - The number of I/O operations per second (IOPS) that the + - `iops` (integer) - The number of I/O operations per second (IOPS) that the volume supports. See the documentation on [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) for more information - - `no_device` (boolean) - Suppresses the specified device included in the + - `no_device` (boolean) - Suppresses the specified device included in the block device mapping of the AMI - - `snapshot_id` (string) - The ID of the snapshot + - `snapshot_id` (string) - The ID of the snapshot - - `virtual_name` (string) - The virtual device name. See the documentation on + - `virtual_name` (string) - The virtual device name. See the documentation on [Block Device Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) for more information - - `volume_size` (integer) - The size of the volume, in GiB. Required if not + - `volume_size` (integer) - The size of the volume, in GiB. Required if not specifying a `snapshot_id` - - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) + - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic volumes -- `ami_description` (string) - The description to set for the +- `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. 
-- `ami_groups` (array of strings) - A list of groups that have access to +- `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't accept any value other than `all`. -- `ami_product_codes` (array of strings) - A list of product codes to +- `ami_product_codes` (array of strings) - A list of product codes to associate with the AMI. By default no product codes are associated with the AMI. -- `ami_regions` (array of strings) - A list of regions to copy the AMI to. +- `ami_regions` (array of strings) - A list of regions to copy the AMI to. Tags and attributes are copied along with the AMI. AMI copying takes time depending on the size of the AMI, but will generally take many minutes. -- `ami_users` (array of strings) - A list of account IDs that have access to +- `ami_users` (array of strings) - A list of account IDs that have access to launch the resulting AMI(s). By default no additional users other than the user creating the AMI has permissions to launch it. -- `ami_virtualization_type` (string) - The type of virtualization for the AMI +- `ami_virtualization_type` (string) - The type of virtualization for the AMI you are building. This option is required to register HVM images. Can be `paravirtual` (default) or `hvm`. -- `associate_public_ip_address` (boolean) - If using a non-default VPC, public +- `associate_public_ip_address` (boolean) - If using a non-default VPC, public IP addresses are not provided by default. If this is toggled, your new instance will get a Public IP. -- `availability_zone` (string) - Destination availability zone to launch +- `availability_zone` (string) - Destination availability zone to launch instance in. Leave this empty to allow Amazon to auto-assign. 
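The `ami_regions`, `ami_users`, and `ami_virtualization_type` options listed above combine like this; a fragment for illustration only, with placeholder regions and a placeholder account ID:

``` json
{
  "ami_regions": ["us-west-1", "us-west-2"],
  "ami_users": ["123456789012"],
  "ami_virtualization_type": "hvm"
}
```

Copies to each listed region happen after the build completes, and each listed account ID is then allowed to launch the resulting AMIs.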
-- `bundle_destination` (string) - The directory on the running instance where +- `bundle_destination` (string) - The directory on the running instance where the bundled AMI will be saved prior to uploading. By default this is `/tmp`. This directory must exist and be writable. -- `bundle_prefix` (string) - The prefix for files created from bundling the +- `bundle_prefix` (string) - The prefix for files created from bundling the root volume. By default this is `image-{{timestamp}}`. The `timestamp` variable should be used to make sure this is unique, otherwise it can collide with other created AMIs by Packer in your account. -- `bundle_upload_command` (string) - The command to use to upload the +- `bundle_upload_command` (string) - The command to use to upload the bundled volume. See the "custom bundle commands" section below for more information. -- `bundle_vol_command` (string) - The command to use to bundle the volume. See +- `bundle_vol_command` (string) - The command to use to bundle the volume. See the "custom bundle commands" section below for more information. -- `custom_endpoint_ec2` (string) - this option is useful if you use +- `custom_endpoint_ec2` (string) - this option is useful if you use another cloud provider that provide a compatible API with aws EC2, - specify another endpoint like this "https://ec2.another.endpoint..com" + specify another endpoint like this "https://ec2.another.endpoint..com" -- `ebs_optimized` (boolean) - Mark instance as [EBS +- `ebs_optimized` (boolean) - Mark instance as [EBS Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). Default `false`. -- `enhanced_networking` (boolean) - Enable enhanced - networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add - `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make - sure enhanced networking is enabled on your instance.
See [Amazon's - documentation on enabling enhanced networking]( - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) +- `enhanced_networking` (boolean) - Enable enhanced + networking (SriovNetSupport and ENA) on HVM-compatible AMIs. If true, add + `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make + sure enhanced networking is enabled on your instance. See [Amazon's + documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking) -- `force_deregister` (boolean) - Force Packer to first deregister an existing +- `force_deregister` (boolean) - Force Packer to first deregister an existing AMI if one with the same name already exists. Defaults to `false`. -- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with +- `force_delete_snapshot` (boolean) - Force Packer to delete snapshots associated with AMIs, which have been deregistered by `force_deregister`. Defaults to `false`. -- `iam_instance_profile` (string) - The name of an [IAM instance +- `iam_instance_profile` (string) - The name of an [IAM instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. -- `launch_block_device_mappings` (array of block device mappings) - Add one or +- `launch_block_device_mappings` (array of block device mappings) - Add one or more block devices before the Packer build starts. These are not necessarily preserved when booting from the AMI built with Packer. See `ami_block_device_mappings`, above, for details. -- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - code. This should probably be a user variable since it changes all the time. +- `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + code. 
This should probably be a user variable since it changes all the time. -- `profile` (string) - The profile to use in the shared credentials file for +- `profile` (string) - The profile to use in the shared credentials file for AWS. See Amazon's documentation on [specifying profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) for more details. -- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, +- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, along with the custom kms key id to use for encryption for that region. Keys must match the regions provided in `ami_regions`. If you just want to encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. @@ -228,38 +227,38 @@ builder. However, you cannot use default key IDs if you are using this in conjunction with `snapshot_users` -- in that situation you must use custom keys. -- `run_tags` (object of key/value strings) - Tags to apply to the instance +- `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a [template engine](/docs/templates/engine.html) where the `SourceAMI` variable is replaced with the source AMI ID and `BuildRegion` variable is replaced with the value of `region`. -- `security_group_id` (string) - The ID (*not* the name) of the security group +- `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will automatically create a new temporary security group to allow SSH access. Note that if this is specified, you must be sure the security group allows access to the `ssh_port` given below. 
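Taken together, options like these map directly onto a builder configuration. The fragment below is only an illustrative sketch (the security group ID and tag values are placeholders, not from this page): it pins an existing security group and uses the template-engine variables described for `run_tags`.

```json
{
  "security_group_id": "sg-0123456789abcdef0",
  "run_tags": {
    "SourceAMI": "{{ .SourceAMI }}",
    "BuildRegion": "{{ .BuildRegion }}"
  }
}
```

With `security_group_id` set this way, remember that the group must allow inbound access on the configured SSH port.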
-- `security_group_ids` (array of strings) - A list of security groups as
+- `security_group_ids` (array of strings) - A list of security groups as
   described above. Note that if this is specified, you must omit the
   `security_group_id`.

-- `skip_region_validation` (boolean) - Set to true if you want to skip
-  validation of the region configuration option. Defaults to `false`.
+- `skip_region_validation` (boolean) - Set to true if you want to skip
+  validation of the region configuration option. Defaults to `false`.

-- `snapshot_groups` (array of strings) - A list of groups that have access to
+- `snapshot_groups` (array of strings) - A list of groups that have access to
   create volumes from the snapshot(s). By default no groups have permission to
   create volumes from the snapshot(s). `all` will make the snapshot publicly
   accessible.

-- `snapshot_users` (array of strings) - A list of account IDs that have access to
+- `snapshot_users` (array of strings) - A list of account IDs that have access to
   create volumes from the snapshot(s). By default no additional users other
   than the user creating the AMI have permission to create volumes from the backing
   snapshot(s).

-- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
+- `source_ami_filter` (object) - Filters used to populate the `source_ami` field.
   Example:

-    ```json
+    ``` json
    {
        "source_ami_filter": {
            "filters": {
@@ -277,21 +276,21 @@ builder.
    NOTE: This will fail unless *exactly* one AMI is returned. In the above
    example, `most_recent` will cause this to succeed by selecting the newest image.

-    - `filters` (map of strings) - filters used to select a `source_ami`.
-      NOTE: This will fail unless *exactly* one AMI is returned.
-      Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
-      is valid.
+    - `filters` (map of strings) - filters used to select a `source_ami`.
+ NOTE: This will fail unless *exactly* one AMI is returned.
+ Any filter described in the docs for [DescribeImages](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
+ is valid.

- - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs.
- This is helpful to limit the AMIs to a trusted third party, or to your own account.
+ - `owners` (array of strings) - This scopes the AMIs to certain Amazon account IDs.
+ This is helpful to limit the AMIs to a trusted third party, or to your own account.

- - `most_recent` (bool) - Selects the newest created image when true.
- This is most useful for selecting a daily distro build.
+ - `most_recent` (bool) - Selects the newest created image when true.
+ This is most useful for selecting a daily distro build.

-- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot.
-  They will override AMI tags if already applied to snapshot.
+- `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot.
+  They will override AMI tags if already applied to snapshot.

-- `spot_price` (string) - The maximum hourly price to launch a spot instance
+- `spot_price` (string) - The maximum hourly price to launch a spot instance
   to create the AMI. It is a type of instance that EC2 starts when the
   maximum price that you specify exceeds the current spot price. Spot price will be
   updated based on available spot instance capacity and current spot
@@ -299,20 +298,20 @@ builder.
   for Packer to automatically discover the best spot price or to `0` to use
   an on-demand instance (default).

-- `spot_price_auto_product` (string) - Required if `spot_price` is set
+- `spot_price_auto_product` (string) - Required if `spot_price` is set
   to `auto`. This tells Packer what sort of AMI you're launching to find
   the best spot price.
This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
   `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`

-- `ssh_keypair_name` (string) - If specified, this is the key that will be
+- `ssh_keypair_name` (string) - If specified, this is the key that will be
   used for SSH with the machine. The key must match a key pair name loaded
-  up into Amazon EC2. By default, this is blank, and Packer will
+  up into Amazon EC2. By default, this is blank, and Packer will
   generate a temporary key pair unless
   [`ssh_password`](/docs/templates/communicator.html#ssh_password) is used.
   [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file)
   or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized.

-- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
+- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to
   authenticate connections to the source instance. No temporary key pair will
   be created, and the values of `ssh_password` and `ssh_private_key_file` will
   be ignored. To use this option with a key pair already configured in the source
@@ -320,48 +319,48 @@ builder.
   in AWS with the source instance, set the `ssh_keypair_name` field to the name
   of the key pair.

-- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
+- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
   IP if available. Also works for WinRM.

-- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
+- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
   `subnet-12345def`, where Packer will launch the EC2 instance. This field is
   required if you are using a non-default VPC.

-- `tags` (object of key/value strings) - Tags applied to the AMI. This is a
+- `tags` (object of key/value strings) - Tags applied to the AMI.
This is a
   [template engine](/docs/templates/engine.html)
   where the `SourceAMI` variable is replaced with the source AMI ID and
   `BuildRegion` variable is replaced with the value of `region`.

-- `temporary_key_pair_name` (string) - The name of the temporary key pair
+- `temporary_key_pair_name` (string) - The name of the temporary key pair
   to generate. By default, Packer generates a name that looks like
-  `packer_`, where \ is a 36 character unique identifier.
+  `packer_<UUID>`, where `<UUID>` is a 36 character unique identifier.

-- `user_data` (string) - User data to apply when launching the instance. Note
+- `user_data` (string) - User data to apply when launching the instance. Note
   that you need to be careful about escaping characters due to the templates
   being JSON. It is often more convenient to use `user_data_file`, instead.

-- `user_data_file` (string) - Path to a file that will be used for the user
+- `user_data_file` (string) - Path to a file that will be used for the user
   data when launching the instance.

-- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
-  in order to create a temporary security group within the VPC. Requires `subnet_id`
-  to be set. If this field is left blank, Packer will try to get the VPC ID from the
-  `subnet_id`.
+- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
+  in order to create a temporary security group within the VPC. Requires `subnet_id`
+  to be set. If this field is left blank, Packer will try to get the VPC ID from the
+  `subnet_id`.

-- `x509_upload_path` (string) - The path on the remote machine where the X509
+- `x509_upload_path` (string) - The path on the remote machine where the X509
   certificate will be uploaded. This path must already exist and be writable.
   X509 certificates are uploaded after provisioning is run, so it is perfectly
   okay to create this directory as part of the provisioning process. Defaults to
   `/tmp`.
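A few of the options documented above can be sketched in a single fragment. All values here are placeholders chosen for illustration; only the field names come from this page.

```json
{
  "vpc_id": "vpc-0123abcd",
  "subnet_id": "subnet-12345def",
  "user_data_file": "./bootstrap.sh",
  "x509_upload_path": "/tmp"
}
```

Setting `vpc_id` explicitly is optional in this sketch, since Packer will try to derive it from `subnet_id` when it is left blank.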
-- `windows_password_timeout` (string) - The timeout for waiting for a Windows
+- `windows_password_timeout` (string) - The timeout for waiting for a Windows
   password for Windows instances. Defaults to 20 minutes. Example value: `10m`

## Basic Example

Here is a basic example. It is completely valid except for the access keys:

-```json
+``` json
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",
@@ -381,7 +380,7 @@ Here is a basic example. It is completely valid except for the access keys:
}
```

--> **Note:** Packer can also read the access key and secret access key from
+-> **Note:** Packer can also read the access key and secret access key from
environment variables. See the configuration reference in the section above
for more information on what environment variables Packer will look for.

@@ -416,7 +415,7 @@ multiple lines for convenience of reading.

The bundle volume command is responsible for executing `ec2-bundle-vol` in
order to store an image of the root filesystem to use to create the AMI.

-```text
+``` text
sudo -i -n ec2-bundle-vol \
  -k {{.KeyPath}} \
  -u {{.AccountId}} \
@@ -432,7 +431,7 @@ sudo -i -n ec2-bundle-vol \

The available template variables should be self-explanatory based on the
parameters they're used to satisfy the `ec2-bundle-vol` command.

-~> **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
+~> **Warning!** Some versions of ec2-bundle-vol silently ignore all .pem and
.gpg files during the bundling of the AMI, which can cause problems on some
systems, such as Ubuntu. You may want to customize the bundle volume command
to include those files (see the `--no-filter` option of `ec2-bundle-vol`).

@@ -444,7 +443,7 @@ multiple lines for convenience of reading. Access key and secret key are omitted

if using instance profile. The bundle upload command is responsible for taking
the bundled volume and uploading it to S3.
-```text +``` text sudo -i -n ec2-upload-bundle \ -b {{.BucketName}} \ -m {{.ManifestPath}} \ diff --git a/website/source/docs/builders/amazon.html.md b/website/source/docs/builders/amazon.html.md index 82d1ca41a..8873dfcbe 100644 --- a/website/source/docs/builders/amazon.html.md +++ b/website/source/docs/builders/amazon.html.md @@ -1,10 +1,10 @@ --- +description: | + Packer is able to create Amazon AMIs. To achieve this, Packer comes with + multiple builders depending on the strategy you want to use to build the AMI. layout: docs -sidebar_current: docs-builders-amazon -page_title: Amazon AMI - Builders -description: |- - Packer is able to create Amazon AMIs. To achieve this, Packer comes with - multiple builders depending on the strategy you want to use to build the AMI. +page_title: 'Amazon AMI - Builders' +sidebar_current: 'docs-builders-amazon' --- # Amazon AMI Builder @@ -34,7 +34,7 @@ Packer supports the following builders at the moment: not require running in AWS. This is an **advanced builder and should not be used by newcomers**. --> **Don't know which builder to use?** If in doubt, use the [amazon-ebs +-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon generally recommends EBS-backed images nowadays. @@ -72,12 +72,12 @@ Credentials are resolved in the following order: Packer depends on the [AWS SDK](https://aws.amazon.com/documentation/sdk-for-go/) to perform automatic -lookup using _credential chains_. In short, the SDK looks for credentials in +lookup using *credential chains*. In short, the SDK looks for credentials in the following order: -1. Environment variables. -2. Shared credentials file. -3. If your application is running on an Amazon EC2 instance, IAM role for Amazon EC2. +1. Environment variables. +2. Shared credentials file. +3. If your application is running on an Amazon EC2 instance, IAM role for Amazon EC2. 
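The first link in that chain means a template can omit credentials entirely and let the SDK read them from the environment. A minimal, hypothetical builder block (the region, AMI ID, and names are placeholders for illustration) would then look like:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
```

with `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` exported in the shell that runs Packer.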
Please refer to the SDK's documentation on [specifying
credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials)
@@ -93,7 +93,7 @@ the task's or instance's IAM role, if it has one.

The following policy document provides the minimal set of permissions necessary
for Packer to work:

-```json
+``` json
{
  "Version": "2012-10-17",
  "Statement": [{
@@ -152,7 +152,7 @@ The example policy below may help packer work with IAM roles. Note that this
example provides more than the minimal set of permissions needed for packer to
work, but specifics will depend on your use-case.

-```json
+``` json
{
  "Sid": "PackerIAMPassRole",
  "Effect": "Allow",
@@ -173,6 +173,6 @@ fail. If that's the case, you might see an error like this:

    ==> amazon-ebs: Error querying AMI: AuthFailure: AWS was not able to validate the provided access credentials

If you suspect your system's date is wrong, you can compare it against
-http://www.time.gov/. On Linux/OS X, you can run the `date` command to get the
+<http://www.time.gov/>. On Linux/OS X, you can run the `date` command to get the
current time. If you're on Linux, you can try setting the time with ntp by
running `sudo ntpd -q`.
diff --git a/website/source/docs/builders/azure-setup.html.md b/website/source/docs/builders/azure-setup.html.md
index 9917fc43f..53a77875e 100644
--- a/website/source/docs/builders/azure-setup.html.md
+++ b/website/source/docs/builders/azure-setup.html.md
@@ -1,35 +1,35 @@
---
+description: |
+    In order to build VMs in Azure, Packer needs various configuration options.
+    These options and how to obtain them are documented on this page.
layout: docs
-sidebar_current: docs-builders-azure-setup
-page_title: Setup - Azure - Builders
-description: |-
-  In order to build VMs in Azure, Packer needs various configuration options.
-  These options and how to obtain them are documented on this page.
+page_title: 'Setup - Azure - Builders'
+sidebar_current: 'docs-builders-azure-setup'
---

# Authorizing Packer Builds in Azure

In order to build VMs in Azure, Packer needs 6 configuration options to be specified:

-- `subscription_id` - UUID identifying your Azure subscription (where billing is handled)
+- `subscription_id` - UUID identifying your Azure subscription (where billing is handled)

-- `client_id` - UUID identifying the Active Directory service principal that will run your Packer builds
+- `client_id` - UUID identifying the Active Directory service principal that will run your Packer builds

-- `client_secret` - service principal secret / password
+- `client_secret` - service principal secret / password

-- `object_id` - service principal object id (OSType = Windows Only)
+- `object_id` - service principal object id (OSType = Windows Only)

-- `resource_group_name` - name of the resource group where your VHD(s) will be stored
+- `resource_group_name` - name of the resource group where your VHD(s) will be stored

-- `storage_account` - name of the storage account where your VHD(s) will be stored
+- `storage_account` - name of the storage account where your VHD(s) will be stored

--> Behind the scenes Packer uses the OAuth protocol to authenticate against Azure Active Directory and authorize requests to the Azure Service Management API. These topics are unnecessarily complicated so we will try to ignore them for the rest of this document.

You do not need to understand how OAuth works in order to use Packer with Azure, though the Active Directory terms "service principal" and "role" will be useful for understanding Azure's access policies. +-> Behind the scenes Packer uses the OAuth protocol to authenticate against Azure Active Directory and authorize requests to the Azure Service Management API. These topics are unnecessarily complicated so we will try to ignore them for the rest of this document.

You do not need to understand how OAuth works in order to use Packer with Azure, though the Active Directory terms "service principal" and "role" will be useful for understanding Azure's access policies.

In order to get all of the items above, you will need a username and password for your Azure account.

## Device Login

-Device login is an alternative way to authorize in Azure Packer. Device login only requires you to know your
+Device login is an alternative way to authorize Packer in Azure. Device login only requires you to know your
Subscription ID. (Device login is only supported for Linux based VMs.) Device login is intended for those who are
first time users, and just want to "kick the tires." We recommend the SPN approach if you intend to automate Packer,
or for deploying Windows VMs.
@@ -38,26 +38,26 @@ deploying Windows VMs.

There are three pieces of information you must provide to enable device login mode.

- 1. SubscriptionID
- 1. Resource Group - parent resource group that Packer uses to build an image.
- 1. Storage Account - storage account where the image will be placed.
+1. SubscriptionID
+2. Resource Group - parent resource group that Packer uses to build an image.
+3. Storage Account - storage account where the image will be placed.

-> Device login mode is enabled by not setting client\_id and client\_secret.

-The device login flow asks that you open a web browser, navigate to http://aka.ms/devicelogin, and input the supplied
+The device login flow asks that you open a web browser, navigate to <http://aka.ms/devicelogin>, and input the supplied
code. This authorizes the Packer for Azure application to act on your behalf. An OAuth token will be created, and stored
in the user's home directory (~/.azure/packer/oauth-TenantID.json). This token is used if the token file exists, and it
-is refreshed as necessary. The token file prevents the need to continually execute the device login flow.
+is refreshed as necessary. The token file prevents the need to continually execute the device login flow.

## Install the Azure CLI

To get the credentials above, we will need to install the Azure CLI. Please refer to Microsoft's official [installation guide](https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/).

--> The guides below also use a tool called [`jq`](https://stedolan.github.io/jq/) to simplify the output from the Azure CLI, though this is optional. If you use homebrew you can simply `brew install node jq`.
+-> The guides below also use a tool called [`jq`](https://stedolan.github.io/jq/) to simplify the output from the Azure CLI, though this is optional. If you use homebrew you can simply `brew install node jq`.

If you already have node.js installed you can use `npm` to install `azure-cli`:

-```shell
+``` shell
$ npm install -g azure-cli --no-progress
```

@@ -73,26 +73,24 @@ If you want more control or the script does not work for you, you can also use t

Login using the Azure CLI

-```shell
+``` shell
$ azure config mode arm
$ azure login -u USERNAME
```

Get your account information

-```shell
+``` shell
$ azure account list --json | jq -r '.[].name'
$ azure account set ACCOUNTNAME
$ azure account show --json | jq -r ".[] | .id"
```

--> Throughout this document when you see a command pipe to `jq` you may instead omit `--json` and everything after it, but the output will be more verbose. For example you can simply run `azure account list` instead.
+-> Throughout this document when you see a command pipe to `jq` you may instead omit `--json` and everything after it, but the output will be more verbose. For example you can simply run `azure account list` instead.

This will print out one line that looks like this:

-```
-4f562e88-8caf-421a-b4da-e3f6786c52ec
-```
+    4f562e88-8caf-421a-b4da-e3f6786c52ec

This is your `subscription_id`. Note it for later.

@@ -100,7 +98,7 @@
A [resource group](https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/#resource-groups) is used to organize related resources. Resource groups and storage accounts are tied to a location. To see available locations, run: -```shell +``` shell $ azure location list # ... @@ -113,7 +111,7 @@ Your storage account (below) will need to use the same `GROUPNAME` and `LOCATION We will need to create a storage account where your Packer artifacts will be stored. We will create a `LRS` storage account which is the least expensive price/GB at the time of writing. -```shell +``` shell $ azure storage account create \ -g GROUPNAME \ -l LOCATION \ @@ -121,7 +119,7 @@ $ azure storage account create \ --kind storage STORAGENAME ``` --> `LRS` is meant as a literal "LRS" and not as a variable. +-> `LRS` is meant as a literal "LRS" and not as a variable. Make sure that `GROUPNAME` and `LOCATION` are the same as above. @@ -129,7 +127,7 @@ Make sure that `GROUPNAME` and `LOCATION` are the same as above. An application represents a way to authorize access to the Azure API. Note that you will need to specify a URL for your application (this is intended to be used for OAuth callbacks) but these do not actually need to be valid URLs. -```shell +``` shell $ azure ad app create \ -n APPNAME \ -i APPURL \ @@ -145,7 +143,7 @@ You cannot directly grant permissions to an application. Instead, you create a s First, get the `APPID` for the application we just created. -```shell +``` shell $ azure ad app list --json \ | jq '.[] | select(.displayName | contains("APPNAME")) | .appId' # ... @@ -157,7 +155,7 @@ $ azure ad sp create --applicationId APPID Finally, we will associate the proper permissions with our application's service principal. We're going to assign the `Owner` role to our Packer application and change the scope to manage our whole subscription. (The `Owner` role can be scoped to a specific resource group to further reduce the scope of the account.) 
This allows Packer to create temporary resource groups for each build.

-```shell
+``` shell
$ azure role assignment create \
  --spn APPURL \
  -o "Owner" \
@@ -166,26 +164,25 @@ $ azure role assignment create \

There are a lot of pre-defined roles and you can define your own with more granular permissions, though this is out of scope. You can see a list of pre-configured roles via:

-```shell
+``` shell
$ azure role list --json \
  | jq ".[] | {name:.Name, description:.Description}"
```

-
### Configuring Packer

Now (finally) everything has been set up in Azure. Let's get our configuration keys together:

Get `subscription_id`:

-```shell
+``` shell
$ azure account show --json \
  | jq ".[] | .id"
```

Get `client_id`

-```shell
+``` shell
$ azure ad app list --json \
  | jq '.[] | select(.displayName | contains("APPNAME")) | .appId'
```

@@ -196,18 +193,18 @@ This cannot be retrieved. If you forgot this, you will have to delete and re-cre

Get `object_id` (OSType=Windows only)

-```shell
+``` shell
azure ad sp show -n CLIENT_ID
```

Get `resource_group_name`

-```shell
+``` shell
$ azure group list
```

Get `storage_account`

-```shell
+``` shell
$ azure storage account list
```
diff --git a/website/source/docs/builders/azure.html.md b/website/source/docs/builders/azure.html.md
index 0b63a7941..c69e7090b 100644
--- a/website/source/docs/builders/azure.html.md
+++ b/website/source/docs/builders/azure.html.md
@@ -1,9 +1,8 @@
---
+description: 'Packer supports building VHDs in Azure Resource Manager.'
layout: docs
-sidebar_current: docs-builders-azure
-page_title: Azure - Builders
-description: |-
-  Packer supports building VHDs in Azure Resource Manager.
+page_title: 'Azure - Builders'
+sidebar_current: 'docs-builders-azure'
---

# Azure Resource Manager Builder

@@ -26,100 +25,99 @@ builder.

### Required:

-- `client_id` (string) The Active Directory service principal associated with your builder.
+- `client_id` (string) The Active Directory service principal associated with your builder.
-- `client_secret` (string) The password or secret for your service principal.
+- `client_secret` (string) The password or secret for your service principal.

-- `resource_group_name` (string) Resource group under which the final artifact will be stored.
+- `resource_group_name` (string) Resource group under which the final artifact will be stored.

-- `storage_account` (string) Storage account under which the final artifact will be stored.
+- `storage_account` (string) Storage account under which the final artifact will be stored.

-- `subscription_id` (string) Subscription under which the build will be performed. **The service principal specified in `client_id` must have full access to this subscription.**
+- `subscription_id` (string) Subscription under which the build will be performed. **The service principal specified in `client_id` must have full access to this subscription.**

-- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be organized in Azure. The captured VHD's URL will be https://.blob.core.windows.net/system/Microsoft.Compute/Images//.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd.
+- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be organized in Azure. The captured VHD's URL will be https://.blob.core.windows.net/system/Microsoft.Compute/Images//.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd.

-- `capture_name_prefix` (string) VHD prefix. The final artifacts will be named `PREFIX-osDisk.UUID` and `PREFIX-vmTemplate.UUID`.
+- `capture_name_prefix` (string) VHD prefix. The final artifacts will be named `PREFIX-osDisk.UUID` and `PREFIX-vmTemplate.UUID`.

-- `image_publisher` (string) PublisherName for your base image. See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details.
+- `image_publisher` (string) PublisherName for your base image.
See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details. CLI example `azure vm image list-publishers -l westus` -- `image_offer` (string) Offer for your base image. See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details. +- `image_offer` (string) Offer for your base image. See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details. CLI example `azure vm image list-offers -l westus -p Canonical` -- `image_sku` (string) SKU for your base image. See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details. +- `image_sku` (string) SKU for your base image. See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details. CLI example `azure vm image list-skus -l westus -p Canonical -o UbuntuServer` -- `location` (string) Azure datacenter in which your VM will build. +- `location` (string) Azure datacenter in which your VM will build. CLI example `azure location list` ### Optional: -- `azure_tags` (object of name/value strings) - the user can define up to 15 tags. Tag names cannot exceed 512 - characters, and tag values cannot exceed 256 characters. Tags are applied to every resource deployed by a Packer +- `azure_tags` (object of name/value strings) - the user can define up to 15 tags. Tag names cannot exceed 512 + characters, and tag values cannot exceed 256 characters. Tags are applied to every resource deployed by a Packer build, i.e. Resource Group, VM, NIC, VNET, Public IP, KeyVault, etc. -- `cloud_environment_name` (string) One of `Public`, `China`, `Germany`, or +- `cloud_environment_name` (string) One of `Public`, `China`, `Germany`, or `USGovernment`. Defaults to `Public`. Long forms such as `USGovernmentCloud` and `AzureUSGovernmentCloud` are also supported. 
-- `image_version` (string) Specify a specific version of an OS to boot from. Defaults to `latest`. There may be a - difference in versions available across regions due to image synchronization latency. To ensure a consistent - version across regions set this value to one that is available in all regions where you are deploying. +- `image_version` (string) Specify a specific version of an OS to boot from. Defaults to `latest`. There may be a + difference in versions available across regions due to image synchronization latency. To ensure a consistent + version across regions set this value to one that is available in all regions where you are deploying. CLI example `azure vm image list -l westus -p Canonical -o UbuntuServer -k 16.04.0-LTS` -- `image_url` (string) Specify a custom VHD to use. If this value is set, do not set image_publisher, image_offer, - image_sku, or image_version. +- `image_url` (string) Specify a custom VHD to use. If this value is set, do not set image\_publisher, image\_offer, + image\_sku, or image\_version. -- `temp_compute_name` (string) temporary name assigned to the VM. If this value is not set, a random value will be assigned. Knowing the resource group and VM name allows one to execute commands to update the VM during a Packer build, e.g. attach a resource disk to the VM. +- `temp_compute_name` (string) temporary name assigned to the VM. If this value is not set, a random value will be assigned. Knowing the resource group and VM name allows one to execute commands to update the VM during a Packer build, e.g. attach a resource disk to the VM. -- `temp_resource_group_name` (string) temporary name assigned to the resource group. If this value is not set, a random value will be assigned. +- `temp_resource_group_name` (string) temporary name assigned to the resource group. If this value is not set, a random value will be assigned. 
-- `tenant_id` (string) The account identifier with which your `client_id` and `subscription_id` are associated. If not
-  specified, `tenant_id` will be looked up using `subscription_id`.
+- `tenant_id` (string) The account identifier with which your `client_id` and `subscription_id` are associated. If not
+  specified, `tenant_id` will be looked up using `subscription_id`.

-- `object_id` (string) Specify an OAuth Object ID to protect WinRM certificates
-  created at runtime. This variable is required when creating images based on
-  Windows; this variable is not used by non-Windows builds. See `Windows`
+- `object_id` (string) Specify an OAuth Object ID to protect WinRM certificates
+  created at runtime. This variable is required when creating images based on
+  Windows; this variable is not used by non-Windows builds. See `Windows`
   behavior for `os_type`, below.

-- `os_type` (string) If either `Linux` or `Windows` is specified Packer will
+- `os_type` (string) If either `Linux` or `Windows` is specified Packer will
   automatically configure authentication credentials for the provisioned
   machine. For `Linux` this configures an SSH authorized key. For `Windows`
   this configures a WinRM certificate.

-- `os_disk_size_gb` (int32) Specify the size of the OS disk in GB (gigabytes). Values of zero or less are
+- `os_disk_size_gb` (int32) Specify the size of the OS disk in GB (gigabytes). Values of zero or less are
   ignored.

-- `virtual_network_name` (string) Use a pre-existing virtual network for the VM. This option enables private
-  communication with the VM; no public IP address is **used** or **provisioned**. This value should only be set if
+- `virtual_network_name` (string) Use a pre-existing virtual network for the VM. This option enables private
+  communication with the VM; no public IP address is **used** or **provisioned**. This value should only be set if
   Packer is executed from a host on the same subnet / virtual network.
-- `virtual_network_resource_group_name` (string) If virtual_network_name is set, this value **may** also be set. If - virtual_network_name is set, and this value is not set the builder attempts to determine the resource group - containing the virtual network. If the resource group cannot be found, or it cannot be disambiguated, this value +- `virtual_network_resource_group_name` (string) If virtual\_network\_name is set, this value **may** also be set. If + virtual\_network\_name is set, and this value is not set the builder attempts to determine the resource group + containing the virtual network. If the resource group cannot be found, or it cannot be disambiguated, this value should be set. -- `virtual_network_subnet_name` (string) If virtual_network_name is set, this value **may** also be set. If - virtual_network_name is set, and this value is not set the builder attempts to determine the subnet to use with - the virtual network. If the subnet cannot be found, or it cannot be disambiguated, this value should be set. +- `virtual_network_subnet_name` (string) If virtual\_network\_name is set, this value **may** also be set. If + virtual\_network\_name is set, and this value is not set the builder attempts to determine the subnet to use with + the virtual network. If the subnet cannot be found, or it cannot be disambiguated, this value should be set. -- `vm_size` (string) Size of the VM used for building. This can be changed +- `vm_size` (string) Size of the VM used for building. This can be changed when you deploy a VM from your VHD. See [pricing](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/) information. Defaults to `Standard_A1`. CLI example `azure vm sizes -l westus` - ## Basic Example Here is a basic example for Azure. -```json +``` json { "type": "azure-arm", @@ -149,15 +147,15 @@ Here is a basic example for Azure. ## Deprovision -Azure VMs should be deprovisioned at the end of every build. 
For Windows this means executing sysprep, and for Linux this means executing the waagent deprovision process. +Azure VMs should be deprovisioned at the end of every build. For Windows this means executing sysprep, and for Linux this means executing the waagent deprovision process. Please refer to the Azure [examples](https://github.com/hashicorp/packer/tree/master/examples/azure) for complete examples showing the deprovision process. ### Windows -The following provisioner snippet shows how to sysprep a Windows VM. Deprovision should be the last operation executed by a build. +The following provisioner snippet shows how to sysprep a Windows VM. Deprovision should be the last operation executed by a build. -```json +``` json { "provisioners": [ { @@ -173,9 +171,9 @@ The following provisioner snippet shows how to sysprep a Windows VM. Deprovisio ### Linux -The following provisioner snippet shows how to deprovision a Linux VM. Deprovision should be the last operation executed by a build. +The following provisioner snippet shows how to deprovision a Linux VM. Deprovision should be the last operation executed by a build. -```json +``` json { "provisioners": [ { @@ -192,70 +190,68 @@ The following provisioner snippet shows how to deprovision a Linux VM. Deprovis To learn more about the Linux deprovision process please see WALinuxAgent's [README](https://github.com/Azure/WALinuxAgent/blob/master/README.md). -#### skip_clean +#### skip\_clean -Customers have reported issues with the deprovision process where the builder hangs. The error message is similar to the following. +Customers have reported issues with the deprovision process where the builder hangs. The error message is similar to the following. 
-``` -Build 'azure-arm' errored: Retryable error: Error removing temporary script at /tmp/script_9899.sh: ssh: handshake failed: EOF -``` + Build 'azure-arm' errored: Retryable error: Error removing temporary script at /tmp/script_9899.sh: ssh: handshake failed: EOF -One solution is to set skip_clean to true in the provisioner. This prevents Packer from cleaning up any helper scripts uploaded to the VM during the build. +One solution is to set skip\_clean to true in the provisioner. This prevents Packer from cleaning up any helper scripts uploaded to the VM during the build. ## Defaults -The Azure builder attempts to pick default values that provide for a just works experience. These values can be changed by the user to more suitable values. +The Azure builder attempts to pick default values that provide for a just works experience. These values can be changed by the user to more suitable values. - * The default user name is packer not root as in other builders. Most distros on Azure do not allow root to SSH to a VM hence the need for a non-root default user. Set the ssh_username option to override the default value. - * The default VM size is Standard_A1. Set the vm_size option to override the default value. - * The default image version is latest. Set the image_version option to override the default value. +- The default user name is packer not root as in other builders. Most distros on Azure do not allow root to SSH to a VM hence the need for a non-root default user. Set the ssh\_username option to override the default value. +- The default VM size is Standard\_A1. Set the vm\_size option to override the default value. +- The default image version is latest. Set the image\_version option to override the default value. ## Implementation -~> **Warning!** This is an advanced topic. You do not need to understand the implementation to use the Azure +~> **Warning!** This is an advanced topic. You do not need to understand the implementation to use the Azure builder. 
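As an aside to the `skip_clean` section above: a hypothetical provisioner fragment with that workaround applied might look like the following (the script name is a placeholder, not part of this diff):

```json
{
  "provisioners": [
    {
      "type": "shell",
      "skip_clean": true,
      "scripts": ["setup.sh"]
    }
  ]
}
```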
The Azure builder uses ARM
[templates](https://azure.microsoft.com/en-us/documentation/articles/resource-group-authoring-templates/) to deploy
-resources. ARM templates allow you to express the what without having to express the how.
+resources. ARM templates allow you to express the what without having to express the how.

-The Azure builder works under the assumption that it creates everything it needs to execute a build. When the build has
-completed it simply deletes the resource group to cleanup any runtime resources. Resource groups are named using the
+The Azure builder works under the assumption that it creates everything it needs to execute a build. When the build has
+completed it simply deletes the resource group to clean up any runtime resources. Resource groups are named using the
form `packer-Resource-Group-<random>`. The value `<random>` is a random value that is generated at every invocation of
-packer. The `<random>` value is re-used as much as possible when naming resources, so users can better identify and
+packer. The `<random>` value is re-used as much as possible when naming resources, so users can better identify and
group these transient resources when seen in their subscription.

- > The VHD is created on a user specified storage account, not a random one created at runtime. When a virtual machine
- is captured the resulting VHD is stored on the same storage account as the source VHD. The VHD created by Packer must
- persist after a build is complete, which is why the storage account is set by the user.
+> The VHD is created on a user specified storage account, not a random one created at runtime. When a virtual machine
+> is captured the resulting VHD is stored on the same storage account as the source VHD. The VHD created by Packer must
+> persist after a build is complete, which is why the storage account is set by the user.

The basic steps for a build are:

- 1. Create a resource group.
- 1. Validate and deploy a VM template.
- 1.
Execute provision - defined by the user; typically shell commands. - 1. Power off and capture the VM. - 1. Delete the resource group. - 1. Delete the temporary VM's OS disk. +1. Create a resource group. +2. Validate and deploy a VM template. +3. Execute provision - defined by the user; typically shell commands. +4. Power off and capture the VM. +5. Delete the resource group. +6. Delete the temporary VM's OS disk. -The templates used for a build are currently fixed in the code. There is a template for Linux, Windows, and KeyVault. +The templates used for a build are currently fixed in the code. There is a template for Linux, Windows, and KeyVault. The templates are themselves templated with place holders for names, passwords, SSH keys, certificates, etc. ### What's Randomized? The Azure builder creates the following random values at runtime. - * Administrator Password: a random 32-character value using the *password alphabet*. - * Certificate: a 2,048-bit certificate used to secure WinRM communication. The certificate is valid for 24-hours, which starts roughly at invocation time. - * Certificate Password: a random 32-character value using the *password alphabet* used to protect the private key of the certificate. - * Compute Name: a random 15-character name prefixed with pkrvm; the name of the VM. - * Deployment Name: a random 15-character name prefixed with pkfdp; the name of the deployment. - * KeyVault Name: a random 15-character name prefixed with pkrkv. - * OS Disk Name: a random 15-character name prefixed with pkros. - * Resource Group Name: a random 33-character name prefixed with packer-Resource-Group-. - * SSH Key Pair: a 2,048-bit asymmetric key pair; can be overriden by the user. +- Administrator Password: a random 32-character value using the *password alphabet*. +- Certificate: a 2,048-bit certificate used to secure WinRM communication. The certificate is valid for 24-hours, which starts roughly at invocation time. 
+- Certificate Password: a random 32-character value using the *password alphabet* used to protect the private key of the certificate.
+- Compute Name: a random 15-character name prefixed with pkrvm; the name of the VM.
+- Deployment Name: a random 15-character name prefixed with pkfdp; the name of the deployment.
+- KeyVault Name: a random 15-character name prefixed with pkrkv.
+- OS Disk Name: a random 15-character name prefixed with pkros.
+- Resource Group Name: a random 33-character name prefixed with packer-Resource-Group-.
+- SSH Key Pair: a 2,048-bit asymmetric key pair; can be overridden by the user.

-The default alphabet used for random values is **0123456789bcdfghjklmnpqrstvwxyz**. The alphabet was reduced (no
+The default alphabet used for random values is **0123456789bcdfghjklmnpqrstvwxyz**. The alphabet was reduced (no
vowels) to prevent running afoul of Azure decency controls.

The password alphabet used for random values is **0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ**.
@@ -263,28 +259,28 @@ The password alphabet used for random values is **0123456789abcdefghijklmnopqrst

### Windows

The Windows implementation is very similar to the Linux build, with the exception that it deploys a template to
-configure KeyVault. Packer communicates with a Windows VM using the WinRM protocol. Windows VMs on Azure default to
-using both password and certificate based authentication for WinRM.
The password is easily set via the VM ARM template, +but the certificate requires an intermediary. The intermediary for Azure is KeyVault. The certificate is uploaded to a +new KeyVault provisioned in the same resource group as the VM. When the Windows VM is deployed, it links to the certificate in KeyVault, and Azure will ensure the certificate is injected as part of deployment. The basic steps for a Windows build are: - 1. Create a resource group. - 1. Validate and deploy a KeyVault template. - 1. Validate and deploy a VM template. - 1. Execute provision - defined by the user; typically shell commands. - 1. Power off and capture the VM. - 1. Delete the resource group. - 1. Delete the temporary VM's OS disk. +1. Create a resource group. +2. Validate and deploy a KeyVault template. +3. Validate and deploy a VM template. +4. Execute provision - defined by the user; typically shell commands. +5. Power off and capture the VM. +6. Delete the resource group. +7. Delete the temporary VM's OS disk. -A Windows build requires two templates and two deployments. Unfortunately, the KeyVault and VM cannot be deployed at -the same time hence the need for two templates and deployments. The time required to deploy a KeyVault template is +A Windows build requires two templates and two deployments. Unfortunately, the KeyVault and VM cannot be deployed at +the same time hence the need for two templates and deployments. The time required to deploy a KeyVault template is minimal, so overall impact is small. - > The KeyVault certificate is protected using the object_id of the SPN. This is why Windows builds require object_id, - and an SPN. The KeyVault is deleted when the resource group is deleted. +> The KeyVault certificate is protected using the object\_id of the SPN. This is why Windows builds require object\_id, +> and an SPN. The KeyVault is deleted when the resource group is deleted. 
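The fixed-length, prefixed names listed under "What's Randomized?" above can be sketched roughly as follows. This is a speculative illustration, not the builder's actual code; it assumes only the 15-character length and the vowel-free alphabet quoted in the docs:

```python
import secrets

# Vowel-free alphabet quoted in the docs (digits plus consonants).
NAME_ALPHABET = "0123456789bcdfghjklmnpqrstvwxyz"

def random_name(prefix: str, total_len: int = 15) -> str:
    """Return a fixed-length name such as pkrvm..., pkrkv..., or pkros...."""
    body = "".join(secrets.choice(NAME_ALPHABET)
                   for _ in range(total_len - len(prefix)))
    return prefix + body

print(random_name("pkrvm"))
```

The same helper would cover the VM, KeyVault, and OS disk names by swapping the prefix; the resource group name simply uses a longer total length.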
See the
[examples/azure](https://github.com/hashicorp/packer/tree/master/examples/azure) folder in the packer project
for more examples.
diff --git a/website/source/docs/builders/cloudstack.html.md b/website/source/docs/builders/cloudstack.html.md
index 07559d6ac..8c2d4d3b6 100644
--- a/website/source/docs/builders/cloudstack.html.md
+++ b/website/source/docs/builders/cloudstack.html.md
@@ -1,12 +1,12 @@
 ---
+description: |
+    The cloudstack Packer builder is able to create new templates for use with
+    CloudStack. The builder takes either an ISO or an existing template as its
+    source, runs any provisioning necessary on the instance after launching it and
+    then creates a new template from that instance.
 layout: docs
-sidebar_current: docs-builders-cloudstack
-page_title: CloudStack - Builders
-description: |-
-  The cloudstack Packer builder is able to create new templates for use with
-  CloudStack. The builder takes either an ISO or an existing template as it's
-  source, runs any provisioning necessary on the instance after launching it and
-  then creates a new template from that instance.
+page_title: 'CloudStack - Builders'
+sidebar_current: 'docs-builders-cloudstack'
 ---

 # CloudStack Builder
@@ -33,101 +33,101 @@ builder.

 ### Required:

-- `api_url` (string) - The CloudStack API endpoint we will connect to.
+- `api_url` (string) - The CloudStack API endpoint we will connect to.

-- `api_key` (string) - The API key used to sign all API requests.
+- `api_key` (string) - The API key used to sign all API requests.

-- `cidr_list` (array) - List of CIDR's that will have access to the new
+- `cidr_list` (array) - List of CIDRs that will have access to the new
    instance. This is needed in order for any provisioners to be able to connect
    to the instance. Usually this will be the NAT address of your current
    location. Only required when `use_local_ip_address` is `false`.

-- `instance_name` (string) - The name of the instance.
Defaults to
+- `instance_name` (string) - The name of the instance. Defaults to
    "packer-UUID" where UUID is dynamically generated.

-- `network` (string) - The name or ID of the network to connect the instance
+- `network` (string) - The name or ID of the network to connect the instance
    to.

-- `secret_key` (string) - The secret key used to sign all API requests.
+- `secret_key` (string) - The secret key used to sign all API requests.

-- `service_offering` (string) - The name or ID of the service offering used
+- `service_offering` (string) - The name or ID of the service offering used
    for the instance.

-- `soure_iso` (string) - The name or ID of an ISO that will be mounted before
+- `source_iso` (string) - The name or ID of an ISO that will be mounted before
    booting the instance. This option is mutual exclusive with `source_template`.

-- `source_template` (string) - The name or ID of the template used as base
+- `source_template` (string) - The name or ID of the template used as base
    template for the instance. This option is mutual explusive with `source_iso`.

-- `template_name` (string) - The name of the new template. Defaults to
+- `template_name` (string) - The name of the new template. Defaults to
    "packer-{{timestamp}}" where timestamp will be the current time.

-- `template_display_text` (string) - The display text of the new template.
+- `template_display_text` (string) - The display text of the new template.
    Defaults to the `template_name`.

-- `template_os` (string) - The name or ID of the template OS for the new
+- `template_os` (string) - The name or ID of the template OS for the new
    template that will be created.

-- `zone` (string) - The name or ID of the zone where the instance will be
+- `zone` (string) - The name or ID of the zone where the instance will be
    created.

### Optional:

-- `async_timeout` (int) - The time duration to wait for async calls to
+- `async_timeout` (int) - The time duration to wait for async calls to
    finish. Defaults to 30m.
-- `disk_offering` (string) - The name or ID of the disk offering used for the
+- `disk_offering` (string) - The name or ID of the disk offering used for the
    instance. This option is only available (and also required) when using
    `source_iso`.

-- `disk_size` (int) - The size (in GB) of the root disk of the new instance.
+- `disk_size` (int) - The size (in GB) of the root disk of the new instance.
    This option is only available when using `source_template`.

-- `http_get_only` (boolean) - Some cloud providers only allow HTTP GET calls to
+- `http_get_only` (boolean) - Some cloud providers only allow HTTP GET calls to
    their CloudStack API. If using such a provider, you need to set this to
    `true` in order for the provider to only make GET calls and no POST calls.

-- `hypervisor` (string) - The target hypervisor (e.g. `XenServer`, `KVM`) for
+- `hypervisor` (string) - The target hypervisor (e.g. `XenServer`, `KVM`) for
    the new template. This option is required when using `source_iso`.

-- `keypair` (string) - The name of the SSH key pair that will be used to
+- `keypair` (string) - The name of the SSH key pair that will be used to
    access the instance. The SSH key pair is assumed to be already available
    within CloudStack.

-- `project` (string) - The name or ID of the project to deploy the instance to.
+- `project` (string) - The name or ID of the project to deploy the instance to.

-- `public_ip_address` (string) - The public IP address or it's ID used for
+- `public_ip_address` (string) - The public IP address or its ID used for
    connecting any provisioners to. If not provided, a temporary public IP
    address will be associated and released during the Packer run.

-- `ssl_no_verify` (boolean) - Set to `true` to skip SSL verification. Defaults
+- `ssl_no_verify` (boolean) - Set to `true` to skip SSL verification. Defaults
    to `false`.
-- `template_featured` (boolean) - Set to `true` to indicate that the template +- `template_featured` (boolean) - Set to `true` to indicate that the template is featured. Defaults to `false`. -- `template_public` (boolean) - Set to `true` to indicate that the template is +- `template_public` (boolean) - Set to `true` to indicate that the template is available for all accounts. Defaults to `false`. -- `template_password_enabled` (boolean) - Set to `true` to indicate the template +- `template_password_enabled` (boolean) - Set to `true` to indicate the template should be password enabled. Defaults to `false`. -- `template_requires_hvm` (boolean) - Set to `true` to indicate the template +- `template_requires_hvm` (boolean) - Set to `true` to indicate the template requires hardware-assisted virtualization. Defaults to `false`. -- `template_scalable` (boolean) - Set to `true` to indicate that the template +- `template_scalable` (boolean) - Set to `true` to indicate that the template contains tools to support dynamic scaling of VM cpu/memory. Defaults to `false`. -- `user_data` (string) - User data to launch with the instance. +- `user_data` (string) - User data to launch with the instance. -- `use_local_ip_address` (boolean) - Set to `true` to indicate that the +- `use_local_ip_address` (boolean) - Set to `true` to indicate that the provisioners should connect to the local IP address of the instance. ## Basic Example Here is a basic example. -```json +``` json { "type": "cloudstack", "api_url": "https://cloudstack.company.com/client/api", diff --git a/website/source/docs/builders/custom.html.md b/website/source/docs/builders/custom.html.md index 64439ebc0..d5948843e 100644 --- a/website/source/docs/builders/custom.html.md +++ b/website/source/docs/builders/custom.html.md @@ -1,11 +1,11 @@ --- +description: | + Packer is extensible, allowing you to write new builders without having to + modify the core source code of Packer itself. 
Documentation for creating new + builders is covered in the custom builders page of the Packer plugin section. layout: docs -sidebar_current: docs-builders-custom -page_title: Custom - Builders -description: |- - Packer is extensible, allowing you to write new builders without having to - modify the core source code of Packer itself. Documentation for creating new - builders is covered in the custom builders page of the Packer plugin section. +page_title: 'Custom - Builders' +sidebar_current: 'docs-builders-custom' --- # Custom Builder diff --git a/website/source/docs/builders/digitalocean.html.md b/website/source/docs/builders/digitalocean.html.md index c0bac0f21..f42186af4 100644 --- a/website/source/docs/builders/digitalocean.html.md +++ b/website/source/docs/builders/digitalocean.html.md @@ -1,16 +1,15 @@ --- +description: | + The digitalocean Packer builder is able to create new images for use with + DigitalOcean. The builder takes a source image, runs any provisioning + necessary on the image after launching it, then snapshots it into a reusable + image. This reusable image can then be used as the foundation of new servers + that are launched within DigitalOcean. layout: docs -sidebar_current: docs-builders-digitalocean -page_title: DigitalOcean - Builders -description: |- - The digitalocean Packer builder is able to create new images for use with - DigitalOcean. The builder takes a source image, runs any provisioning - necessary on the image after launching it, then snapshots it into a reusable - image. This reusable image can then be used as the foundation of new servers - that are launched within DigitalOcean. +page_title: 'DigitalOcean - Builders' +sidebar_current: 'docs-builders-digitalocean' --- - # DigitalOcean Builder Type: `digitalocean` @@ -36,63 +35,62 @@ builder. ### Required: -- `api_token` (string) - The client TOKEN to use to access your account. It +- `api_token` (string) - The client TOKEN to use to access your account. 
It
    can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`, if
    set.

-- `image` (string) - The name (or slug) of the base image to use. This is the
+- `image` (string) - The name (or slug) of the base image to use. This is the
    image that will be used to launch a new droplet and provision it. See
-    [https://developers.digitalocean.com/documentation/v2/\#list-all-images](https://developers.digitalocean.com/documentation/v2/#list-all-images) for
+    <https://developers.digitalocean.com/documentation/v2/#list-all-images> for
    details on how to get a list of the accepted image names/slugs.

-- `region` (string) - The name (or slug) of the region to launch the
+- `region` (string) - The name (or slug) of the region to launch the
    droplet in. Consequently, this is the region where the snapshot will
    be available. See
-    [https://developers.digitalocean.com/documentation/v2/\#list-all-regions](https://developers.digitalocean.com/documentation/v2/#list-all-regions) for
+    <https://developers.digitalocean.com/documentation/v2/#list-all-regions> for
    the accepted region names/slugs.

-- `size` (string) - The name (or slug) of the droplet size to use. See
-    [https://developers.digitalocean.com/documentation/v2/\#list-all-sizes](https://developers.digitalocean.com/documentation/v2/#list-all-sizes) for
+- `size` (string) - The name (or slug) of the droplet size to use. See
+    <https://developers.digitalocean.com/documentation/v2/#list-all-sizes> for
    the accepted size names/slugs.

### Optional:

-- `api_url` (string) - Non standard api endpoint URL. Set this if you are
+- `api_url` (string) - Non standard api endpoint URL. Set this if you are
    using a DigitalOcean API compatible service. It can also be specified via
    environment variable `DIGITALOCEAN_API_URL`.

-- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
+- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
    sets the hostname of the machine to this value.

-- `private_networking` (boolean) - Set to `true` to enable private networking
+- `private_networking` (boolean) - Set to `true` to enable private networking
    for the droplet being created. This defaults to `false`, or not enabled.
-- `monitoring` (boolean) - Set to `true` to enable monitoring +- `monitoring` (boolean) - Set to `true` to enable monitoring for the droplet being created. This defaults to `false`, or not enabled. -- `snapshot_name` (string) - The name of the resulting snapshot that will +- `snapshot_name` (string) - The name of the resulting snapshot that will appear in your account. This must be unique. To help make this unique, use a function like `timestamp` (see [configuration templates](/docs/templates/engine.html) for more info) -- `snapshot_regions` (array of strings) - The regions of the resulting snapshot that will +- `snapshot_regions` (array of strings) - The regions of the resulting snapshot that will appear in your account. -- `state_timeout` (string) - The time to wait, as a duration string, for a +- `state_timeout` (string) - The time to wait, as a duration string, for a droplet to enter a desired state (such as "active") before timing out. The default state timeout is "6m". -- `user_data` (string) - User data to launch with the Droplet. -- `user_data_file` (string) - Path to a file that will be used for the user +- `user_data` (string) - User data to launch with the Droplet. +- `user_data_file` (string) - Path to a file that will be used for the user data when launching the Droplet. - ## Basic Example Here is a basic example. It is completely valid as soon as you enter your own access tokens: -```json +``` json { "type": "digitalocean", "api_token": "YOUR API KEY", diff --git a/website/source/docs/builders/docker.html.md b/website/source/docs/builders/docker.html.md index 8527becf8..838775704 100644 --- a/website/source/docs/builders/docker.html.md +++ b/website/source/docs/builders/docker.html.md @@ -1,11 +1,11 @@ --- +description: | + The docker Packer builder builds Docker images using Docker. The builder + starts a Docker container, runs provisioners within this container, then + exports the container for reuse or commits the image. 
layout: docs -sidebar_current: docs-builders-docker -page_title: Docker - Builders -description: |- - The docker Packer builder builds Docker images using Docker. The builder - starts a Docker container, runs provisioners within this container, then - exports the container for reuse or commits the image. +page_title: 'Docker - Builders' +sidebar_current: 'docs-builders-docker' --- # Docker Builder @@ -33,7 +33,7 @@ what [platforms Docker supports and how to install onto them](https://docs.docke Below is a fully functioning example. It doesn't do anything useful, since no provisioners are defined, but it will effectively repackage an image. -```json +``` json { "type": "docker", "image": "ubuntu", @@ -47,7 +47,7 @@ Below is another example, the same as above but instead of exporting the running container, this one commits the container to an image. The image can then be more easily tagged, pushed, etc. -```json +``` json { "type": "docker", "image": "ubuntu", @@ -66,7 +66,7 @@ Docker](https://docs.docker.com/engine/reference/commandline/commit/). 
Example uses of all of the options, assuming one is building an NGINX image from ubuntu as an simple example: -```json +``` json { "type": "docker", "image": "ubuntu", @@ -88,37 +88,37 @@ from ubuntu as an simple example: Allowed metadata fields that can be changed are: -- CMD - - String, supports both array (escaped) and string form - - EX: `"CMD [\"nginx\", \"-g\", \"daemon off;\"]"` - - EX: `"CMD nginx -g daemon off;"` -- ENTRYPOINT - - String - - EX: `"ENTRYPOINT /var/www/start.sh"` -- ENV - - String, note there is no equal sign: - - EX: `"ENV HOSTNAME www.example.com"` not `"ENV HOSTNAME=www.example.com"` -- EXPOSE - - String, space separated ports - - EX: `"EXPOSE 80 443"` -- LABEL - - String, space separated key=value pairs - - EX: `"LABEL version=1.0"` -- ONBUILD - - String - - EX: `"ONBUILD RUN date"` -- MAINTAINER - - String, deprecated in Docker version 1.13.0 - - EX: `"MAINTAINER NAME"` -- USER - - String - - EX: `"USER USERNAME"` -- VOLUME - - String - - EX: `"VOLUME FROM TO"` -- WORKDIR - - String - - EX: `"WORKDIR PATH"` +- CMD + - String, supports both array (escaped) and string form + - EX: `"CMD [\"nginx\", \"-g\", \"daemon off;\"]"` + - EX: `"CMD nginx -g daemon off;"` +- ENTRYPOINT + - String + - EX: `"ENTRYPOINT /var/www/start.sh"` +- ENV + - String, note there is no equal sign: + - EX: `"ENV HOSTNAME www.example.com"` not `"ENV HOSTNAME=www.example.com"` +- EXPOSE + - String, space separated ports + - EX: `"EXPOSE 80 443"` +- LABEL + - String, space separated key=value pairs + - EX: `"LABEL version=1.0"` +- ONBUILD + - String + - EX: `"ONBUILD RUN date"` +- MAINTAINER + - String, deprecated in Docker version 1.13.0 + - EX: `"MAINTAINER NAME"` +- USER + - String + - EX: `"USER USERNAME"` +- VOLUME + - String + - EX: `"VOLUME FROM TO"` +- WORKDIR + - String + - EX: `"WORKDIR PATH"` ## Configuration Reference @@ -134,40 +134,40 @@ builder. You must specify (only) one of `commit`, `discard`, or `export_path`. 
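To make that mutual-exclusion rule concrete, a discard-only configuration might look like this hypothetical fragment, with `discard` set and neither `commit` nor `export_path` present:

```json
{
  "type": "docker",
  "image": "ubuntu",
  "discard": true
}
```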
-- `commit` (boolean) - If true, the container will be committed to an image +- `commit` (boolean) - If true, the container will be committed to an image rather than exported. -- `discard` (boolean) - Throw away the container when the build is complete. +- `discard` (boolean) - Throw away the container when the build is complete. This is useful for the [artifice post-processor](https://www.packer.io/docs/post-processors/artifice.html). -- `export_path` (string) - The path where the final container will be exported +- `export_path` (string) - The path where the final container will be exported as a tar file. -- `image` (string) - The base image for the Docker container that will +- `image` (string) - The base image for the Docker container that will be started. This image will be pulled from the Docker registry if it doesn't already exist. ### Optional: -- `author` (string) - Set the author (e-mail) of a commit. +- `author` (string) - Set the author (e-mail) of a commit. -- `aws_access_key` (string) - The AWS access key used to communicate with AWS. +- `aws_access_key` (string) - The AWS access key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `aws_secret_key` (string) - The AWS secret key used to communicate with AWS. +- `aws_secret_key` (string) - The AWS secret key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `aws_token` (string) - The AWS access token to use. This is different from the +- `aws_token` (string) - The AWS access token to use. This is different from the access key and secret key. If you're not sure what this is, then you probably don't need it. This will also be read from the `AWS_SESSION_TOKEN` environmental variable. -- `changes` (array of strings) - Dockerfile instructions to add to the commit. +- `changes` (array of strings) - Dockerfile instructions to add to the commit. 
Examples of instructions are `CMD`, `ENTRYPOINT`, `ENV`, and `EXPOSE`. Example: `[ "USER ubuntu", "WORKDIR /app", "EXPOSE 8080" ]` -- `ecr_login` (boolean) - Defaults to false. If true, the builder will login in +- `ecr_login` (boolean) - Defaults to false. If true, the builder will login in order to pull the image from [Amazon EC2 Container Registry (ECR)](https://aws.amazon.com/ecr/). The builder only logs in for the duration of the pull. If true @@ -175,33 +175,33 @@ You must specify (only) one of `commit`, `discard`, or `export_path`. `login_password` will be ignored. For more information see the [section on ECR](#amazon-ec2-container-registry). -- `login` (boolean) - Defaults to false. If true, the builder will login in +- `login` (boolean) - Defaults to false. If true, the builder will login in order to pull the image. The builder only logs in for the duration of the pull. It always logs out afterwards. For logging in to ECR, see `ecr_login`. -- `login_email` (string) - The email to use to authenticate to login. +- `login_email` (string) - The email to use to authenticate to login. -- `login_username` (string) - The username to use to authenticate to login. +- `login_username` (string) - The username to use to authenticate to login. -- `login_password` (string) - The password to use to authenticate to login. +- `login_password` (string) - The password to use to authenticate to login. -- `login_server` (string) - The server address to login to. +- `login_server` (string) - The server address to login to. -- `message` (string) - Set a message for the commit. +- `message` (string) - Set a message for the commit. -- `privileged` (boolean) - If true, run the docker container with the +- `privileged` (boolean) - If true, run the docker container with the `--privileged` flag. This defaults to false if not set.
-- `pull` (boolean) - If true, the configured image will be pulled using +- `pull` (boolean) - If true, the configured image will be pulled using `docker pull` prior to use. Otherwise, it is assumed the image already exists and can be used. This defaults to true if not set. -- `run_command` (array of strings) - An array of arguments to pass to +- `run_command` (array of strings) - An array of arguments to pass to `docker run` in order to run the container. By default this is set to `["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`. As you can see, you have a couple template variables to customize, as well. -- `volumes` (map of strings to strings) - A mapping of additional volumes to +- `volumes` (map of strings to strings) - A mapping of additional volumes to mount into this container. The key of the object is the host path, the value is the container path. @@ -221,7 +221,7 @@ created image. This is accomplished using a sequence definition (a collection of post-processors that are treated as a single pipeline, see [Post-Processors](/docs/templates/post-processors.html) for more information): -```json +``` json { "post-processors": [ [ @@ -245,7 +245,7 @@ pushing the image to a container repository. If you want to do this manually, however, perhaps from a script, you can import the image using the process below: -```shell +``` shell $ docker import - registry.mydomain.com/mycontainer:latest < artifact.tar ``` @@ -260,7 +260,7 @@ which tags and pushes an image.
This is accomplished using a sequence definition (a collection of post-processors that are treated as a single pipeline, see [Post-Processors](/docs/templates/post-processors.html) for more information): -```json +``` json { "post-processors": [ [ @@ -285,7 +285,7 @@ Going a step further, if you wanted to tag and push an image to multiple container repositories, this could be accomplished by defining two, nearly-identical sequence definitions, as demonstrated by the example below: -```json +``` json { "post-processors": [ [ @@ -317,7 +317,7 @@ Packer can tag and push images for use in processors work as described above and example configuration properties are shown below: -```json +``` json { "post-processors": [ [ @@ -358,11 +358,11 @@ Dockerfiles have some additional features that Packer doesn't support which are able to be worked around. Many of these features will be automated by Packer in the future: -- Dockerfiles will snapshot the container at each step, allowing you to go +- Dockerfiles will snapshot the container at each step, allowing you to go back to any step in the history of building. Packer doesn't do this yet, but inter-step snapshotting is on the way. -- Dockerfiles can contain information such as exposed ports, shared volumes, +- Dockerfiles can contain information such as exposed ports, shared volumes, and other metadata. Packer builds a raw Docker container image that has none of this metadata. You can pass in much of this metadata at runtime with `docker run`. diff --git a/website/source/docs/builders/file.html.md b/website/source/docs/builders/file.html.md index cd2fa39b7..c2fd9db36 100644 --- a/website/source/docs/builders/file.html.md +++ b/website/source/docs/builders/file.html.md @@ -1,11 +1,11 @@ --- +description: | + The file Packer builder is not really a builder, it just creates an artifact + from a file. It can be used to debug post-processors without incurring high + wait times. It does not run any provisioners.
layout: docs -sidebar_current: docs-builders-file -page_title: File - Builders -description: |- - The file Packer builder is not really a builder, it just creates an artifact - from a file. It can be used to debug post-processors without incurring high - wait times. It does not run any provisioners. +page_title: 'File - Builders' +sidebar_current: 'docs-builders-file' --- # File Builder @@ -21,7 +21,7 @@ wait times. It does not run any provisioners. Below is a fully functioning example. It doesn't do anything useful, since no provisioners are defined, but it will connect to the specified host via ssh. -```json +``` json { "type": "file", "content": "Lorem ipsum dolor sit amet", @@ -39,7 +39,7 @@ Any [communicator](/docs/templates/communicator.html) defined is ignored. ### Required: -- `target` (string) - The path for a file which will be copied as the +- `target` (string) - The path for a file which will be copied as the artifact. ### Optional: @@ -47,7 +47,7 @@ Any [communicator](/docs/templates/communicator.html) defined is ignored. You can only define one of `source` or `content`. If none of them is defined the artifact will be empty. -- `source` (string) - The path for a file which will be copied as the +- `source` (string) - The path for a file which will be copied as the artifact. -- `content` (string) - The content that will be put into the artifact. +- `content` (string) - The content that will be put into the artifact. diff --git a/website/source/docs/builders/googlecompute.html.md b/website/source/docs/builders/googlecompute.html.md index 8eaaa4552..929fc7705 100644 --- a/website/source/docs/builders/googlecompute.html.md +++ b/website/source/docs/builders/googlecompute.html.md @@ -1,10 +1,10 @@ --- +description: | + The googlecompute Packer builder is able to create images for use with + Google Cloud Compute Engine (GCE) based on existing images. 
layout: docs -sidebar_current: docs-builders-googlecompute -page_title: Google Compute - Builders -description: |- - The googlecompute Packer builder is able to create images for use with - Google Cloud Compute Engine (GCE) based on existing images. +page_title: 'Google Compute - Builders' +sidebar_current: 'docs-builders-googlecompute' --- # Google Compute Builder @@ -17,6 +17,7 @@ Compute Engine](https://cloud.google.com/products/compute-engine)(GCE) based on existing images. Building GCE images from scratch is not possible from Packer at this time. For building images from scratch, please see [Building GCE Images from Scratch](https://cloud.google.com/compute/docs/tutorials/building-images). + ## Authentication Authenticating with Google Cloud services requires at most one JSON file, called @@ -38,17 +39,17 @@ scopes when launching the instance. For `gcloud`, do this via the `--scopes` parameter: -```shell +``` shell $ gcloud compute --project YOUR_PROJECT instances create "INSTANCE-NAME" ... \ --scopes "https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.full_control" \ ``` For the [Google Developers Console](https://console.developers.google.com): -1. Choose "Show advanced options" -1. Tick "Enable Compute Engine service account" -1. Choose "Read Write" for Compute -1. Chose "Full" for "Storage" +1. Choose "Show advanced options" +2. Tick "Enable Compute Engine service account" +3. Choose "Read Write" for Compute +4. Choose "Full" for "Storage" **The service account will be used automatically by Packer as long as there is no *account file* specified in the Packer configuration file.** @@ -60,50 +61,46 @@ you to create and download a credential file that will let you use the `googlecompute` Packer builder anywhere. To make the process more straightforward, it is documented here. -1. Log into the [Google Developers +1. Log into the [Google Developers Console](https://console.developers.google.com) and select a project. -1.
Under the "APIs & Auth" section, click "Credentials." +2. Under the "APIs & Auth" section, click "Credentials." -1. Click the "Create new Client ID" button, select "Service account", and click +3. Click the "Create new Client ID" button, select "Service account", and click "Create Client ID" -1. Click "Generate new JSON key" for the Service Account you just created. A +4. Click "Generate new JSON key" for the Service Account you just created. A JSON file will be downloaded automatically. This is your *account file*. ### Precedence of Authentication Methods Packer looks for credentials in the following places, preferring the first location found: -1. A `account_file` option in your packer file. +1. An `account_file` option in your packer file. -1. A JSON file (Service Account) whose path is specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. +2. A JSON file (Service Account) whose path is specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. -1. A JSON file in a location known to the `gcloud` command-line tool. (`gcloud` creates it when it's configured) +3. A JSON file in a location known to the `gcloud` command-line tool. (`gcloud` creates it when it's configured) On Windows, this is: - ``` - %APPDATA%/gcloud/application_default_credentials.json - ``` + %APPDATA%/gcloud/application_default_credentials.json On other systems: - ``` - $HOME/.config/gcloud/application_default_credentials.json - ``` + $HOME/.config/gcloud/application_default_credentials.json -1. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above) +4. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above) ## Basic Example Below is a fully functioning example.
It doesn't do anything useful, since no provisioners or startup-script metadata are defined, but it will effectively -repackage an existing GCE image. The account_file is obtained in the previous +repackage an existing GCE image. The account\_file is obtained in the previous section. If it parses as JSON it is assumed to be the file itself, otherwise it is assumed to be the path to the file containing the JSON. -```json +``` json { "builders": [ { @@ -155,81 +152,81 @@ builder. ### Required: -- `project_id` (string) - The project ID that will be used to launch instances +- `project_id` (string) - The project ID that will be used to launch instances and store images. -- `source_image` (string) - The source image to use to create the new image +- `source_image` (string) - The source image to use to create the new image from. You can also specify `source_image_family` instead. If both `source_image` and `source_image_family` are specified, `source_image` takes precedence. Example: `"debian-8-jessie-v20161027"` -- `source_image_family` (string) - The source image family to use to create +- `source_image_family` (string) - The source image family to use to create the new image from. The image family always returns its latest image that is not deprecated. Example: `"debian-8"`. -- `zone` (string) - The zone in which to launch the instance used to create +- `zone` (string) - The zone in which to launch the instance used to create the image. Example: `"us-central1-a"` ### Optional: -- `account_file` (string) - The JSON file containing your account credentials. +- `account_file` (string) - The JSON file containing your account credentials. Not required if you run Packer on a GCE instance with a service account. Instructions for creating file or using service accounts are above. -- `address` (string) - The name of a pre-allocated static external IP address. +- `address` (string) - The name of a pre-allocated static external IP address. 
Note, must be the name and not the actual IP address. -- `disk_name` (string) - The name of the disk, if unset the instance name will be +- `disk_name` (string) - The name of the disk, if unset the instance name will be used. -- `disk_size` (integer) - The size of the disk in GB. This defaults to `10`, +- `disk_size` (integer) - The size of the disk in GB. This defaults to `10`, which is 10GB. -- `disk_type` (string) - Type of disk used to back your instance, like `pd-ssd` or `pd-standard`. Defaults to `pd-standard`. +- `disk_type` (string) - Type of disk used to back your instance, like `pd-ssd` or `pd-standard`. Defaults to `pd-standard`. -- `image_description` (string) - The description of the resulting image. +- `image_description` (string) - The description of the resulting image. -- `image_family` (string) - The name of the image family to which the +- `image_family` (string) - The name of the image family to which the resulting image belongs. You can create disks by specifying an image family instead of a specific image name. The image family always returns its latest image that is not deprecated. -- `image_name` (string) - The unique name of the resulting image. Defaults to +- `image_name` (string) - The unique name of the resulting image. Defaults to `"packer-{{timestamp}}"`. -- `instance_name` (string) - A name to give the launched instance. Beware that +- `instance_name` (string) - A name to give the launched instance. Beware that this must be unique. Defaults to `"packer-{{uuid}}"`. -- `machine_type` (string) - The machine type. Defaults to `"n1-standard-1"`. +- `machine_type` (string) - The machine type. Defaults to `"n1-standard-1"`. -- `metadata` (object of key/value strings) - Metadata applied to the launched +- `metadata` (object of key/value strings) - Metadata applied to the launched instance. -- `network` (string) - The Google Compute network to use for the +- `network` (string) - The Google Compute network to use for the launched instance. 
Defaults to `"default"`. -- `network_project_id` (string) - The project ID for the network and subnetwork +- `network_project_id` (string) - The project ID for the network and subnetwork to use for launched instance. Defaults to `project_id`. -- `omit_external_ip` (boolean) - If true, the instance will not have an external IP. +- `omit_external_ip` (boolean) - If true, the instance will not have an external IP. `use_internal_ip` must be true if this property is true. -- `on_host_maintenance` (string) - Sets Host Maintenance Option. Valid +- `on_host_maintenance` (string) - Sets Host Maintenance Option. Valid choices are `MIGRATE` and `TERMINATE`. Please see [GCE Instance Scheduling Options](https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options), - as not all machine_types support `MIGRATE` (i.e. machines with GPUs). + as not all machine\_types support `MIGRATE` (e.g. machines with GPUs). If preemptible is true this can only be `TERMINATE`. If preemptible is false, it defaults to `MIGRATE` -- `preemptible` (boolean) - If true, launch a preembtible instance. +- `preemptible` (boolean) - If true, launch a preemptible instance. -- `region` (string) - The region in which to launch the instance. Defaults to +- `region` (string) - The region in which to launch the instance. Defaults to the region hosting the specified `zone`. -- `scopes` (array of strings) - The service account scopes for launched instance. +- `scopes` (array of strings) - The service account scopes for launched instance. Defaults to: - ```json + ``` json [ "https://www.googleapis.com/auth/userinfo.email", "https://www.googleapis.com/auth/compute", @@ -237,24 +234,24 @@ builder. ] ``` -- `source_image_project_id` (string) - The project ID of the +- `source_image_project_id` (string) - The project ID of the project containing the source image.
-- `startup_script_file` (string) - The filepath to a startup script to run on +- `startup_script_file` (string) - The filepath to a startup script to run on the VM from which the image will be made. -- `state_timeout` (string) - The time to wait for instance state changes. +- `state_timeout` (string) - The time to wait for instance state changes. Defaults to `"5m"`. -- `subnetwork` (string) - The Google Compute subnetwork to use for the launched - instance. Only required if the `network` has been created with custom - subnetting. - Note, the region of the subnetwork must match the `region` or `zone` in - which the VM is launched. +- `subnetwork` (string) - The Google Compute subnetwork to use for the launched + instance. Only required if the `network` has been created with custom + subnetting. + Note, the region of the subnetwork must match the `region` or `zone` in + which the VM is launched. -- `tags` (array of strings) +- `tags` (array of strings) -- `use_internal_ip` (boolean) - If true, use the instance's internal IP +- `use_internal_ip` (boolean) - If true, use the instance's internal IP instead of its external IP during building. ## Startup Scripts @@ -273,10 +270,11 @@ when a startup script fails. ### Windows A Windows startup script can only be provided via the 'windows-startup-script-cmd' instance -creation `metadata` field. The builder will _not_ wait for a Windows startup scripts to +creation `metadata` field. The builder will *not* wait for a Windows startup script to terminate. You have to ensure that it finishes before the instance shuts down. ### Logging + Startup script logs can be copied to a Google Cloud Storage (GCS) location specified via the 'startup-script-log-dest' instance creation `metadata` field. The GCS location must be writeable by the credentials provided in the builder config's `account_file`.
diff --git a/website/source/docs/builders/hyperv-iso.html.md b/website/source/docs/builders/hyperv-iso.html.md index a795a73b7..c7dee3e89 100644 --- a/website/source/docs/builders/hyperv-iso.html.md +++ b/website/source/docs/builders/hyperv-iso.html.md @@ -1,10 +1,10 @@ --- +description: | + The Hyper-V Packer builder is able to create Hyper-V virtual machines and + export them. layout: docs -sidebar_current: docs-builders-hyperv-iso -page_title: Hyper-V ISO - Builders -description: |- - The Hyper-V Packer builder is able to create Hyper-V virtual machines and - export them. +page_title: 'Hyper-V ISO - Builders' +sidebar_current: 'docs-builders-hyperv-iso' --- # Hyper-V Builder (from an ISO) @@ -25,7 +25,7 @@ Here is a basic example. This example is not functional. It will start the OS installer but then fail because we don't provide the preseed file for Ubuntu to self-install. Still, the example serves to show the basic configuration: -```json +``` json { "type": "hyperv-iso", "iso_url": "http://releases.ubuntu.com/12.04/ubuntu-12.04.5-server-amd64.iso", @@ -53,57 +53,57 @@ can be configured for this builder. ### Required: -- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO +- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO files are so large, this is required and Packer will verify it prior to booting a virtual machine with the ISO attached. The type of the checksum is specified with `iso_checksum_type`, documented below. -- `iso_checksum_type` (string) - The type of the checksum specified in +- `iso_checksum_type` (string) - The type of the checksum specified in `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or "sha512" currently. While "none" will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. -- `iso_url` (string) - A URL to the ISO containing the installation image. 
+- `iso_url` (string) - A URL to the ISO containing the installation image. This URL can be either an HTTP URL or a file URL (or path to a file). If this is an HTTP URL, Packer will download iso and cache it between runs. ### Optional: -- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself. -- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `cpu` (integer) - The number of cpus the virtual machine should use. If this isn't specified, +- `cpu` (integer) - The number of cpus the virtual machine should use. If this isn't specified, the default is 1 cpu. -- `disk_size` (integer) - The size, in megabytes, of the hard disk to create +- `disk_size` (integer) - The size, in megabytes, of the hard disk to create for the VM. By default, this is 40 GB. -- `enable_dynamic_memory` (bool) - If true enable dynamic memory for virtual machine. +- `enable_dynamic_memory` (bool) - If true enable dynamic memory for virtual machine. This defaults to false. -- `enable_mac_spoofing` (bool) - If true enable mac spoofing for virtual machine. +- `enable_mac_spoofing` (bool) - If true enable mac spoofing for virtual machine. This defaults to false. -- `enable_secure_boot` (bool) - If true enable secure boot for virtual machine. 
+- `enable_secure_boot` (bool) - If true enable secure boot for virtual machine. This defaults to false. -- `enable_virtualization_extensions` (bool) - If true enable virtualization extensions for virtual machine. +- `enable_virtualization_extensions` (bool) - If true enable virtualization extensions for virtual machine. This defaults to false. For nested virtualization you need to enable mac spoofing, disable dynamic memory and have at least 4GB of RAM for virtual machine. -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files @@ -113,18 +113,18 @@ can be configured for this builder. characters (`*`, `?`, and `[]`) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `generation` (integer) - The Hyper-V generation for the virtual machine. By +- `generation` (integer) - The Hyper-V generation for the virtual machine. By default, this is 1. Generation 2 Hyper-V virtual machines do not support floppy drives. In this scenario use `secondary_iso_images` instead. Hard drives and dvd drives will also be scsi and not ide. -- `guest_additions_mode` (string) - How should guest additions be installed. +- `guest_additions_mode` (string) - How should guest additions be installed. If value `attach` then attach iso image with by specified by `guest_additions_path`. Otherwise guest additions is not installed. -- `guest_additions_path` (string) - The path to the iso image for guest additions. +- `guest_additions_path` (string) - The path to the iso image for guest additions. 
-- `http_directory` (string) - Path to a directory to serve using an HTTP +- `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP @@ -132,68 +132,68 @@ can be configured for this builder. available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum port the same. By default the values are 8000 and 9000, respectively. -- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. +- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. Packer will try these in order. If anything goes wrong attempting to download or while downloading a single URL, it will move on to the next. All URLs must point to the same file (same checksum). By default this is empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. -- `iso_target_extension` (string) - The extension of the iso file after +- `iso_target_extension` (string) - The extension of the iso file after download. This defaults to "iso". -- `iso_target_path` (string) - The path where the iso should be saved after +- `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the original filename as its name. 
-- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `ram_size` (integer) - The size, in megabytes, of the ram to create +- `ram_size` (integer) - The size, in megabytes, of the ram to create for the VM. By default, this is 1 GB. -* `secondary_iso_images` (array of strings) - A list of iso paths to attached to a +- `secondary_iso_images` (array of strings) - A list of iso paths to attach to a VM when it is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no secondary iso will be attached. -- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine unless a shutdown command takes place inside script so this may safely be omitted. If one or more scripts require a reboot it is suggested to leave this blank since reboots may fail and specify the final shutdown command in your last script. -- `shutdown_timeout` (string) - The amount of time to wait after executing +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is "5m", or five minutes.
-- `skip_compaction` (bool) - If true skip compacting the hard disk for virtual machine when +- `skip_compaction` (bool) - If true skip compacting the hard disk for virtual machine when exporting. This defaults to false. -- `switch_name` (string) - The name of the switch to connect the virtual machine to. Be defaulting +- `switch_name` (string) - The name of the switch to connect the virtual machine to. By defaulting this to an empty string, Packer will try to determine the switch to use by looking for external switch that is up and running. -- `switch_vlan_id` (string) - This is the vlan of the virtual switch's network card. +- `switch_vlan_id` (string) - This is the vlan of the virtual switch's network card. By default none is set. If none is set then a vlan is not set on the switch's network card. If this value is set it should match the vlan specified in by `vlan_id`. -- `vlan_id` (string) - This is the vlan of the virtual machine's network card for the new virtual +- `vlan_id` (string) - This is the vlan of the virtual machine's network card for the new virtual machine. By default none is set. If none is set then vlans are not set on the virtual machine's network card. -- `vm_name` (string) - This is the name of the virtual machine for the new virtual +- `vm_name` (string) - This is the name of the virtual machine for the new virtual machine, without the file extension. By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build. @@ -213,65 +213,65 @@ to the machine, simulating a human actually typing the keyboard. There are a set of special keys available. If these are in your boot command, they will be replaced by the proper key: -- `<bs>` - Backspace +- `<bs>` - Backspace -- `<del>` - Delete +- `<del>` - Delete -- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. +- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. -- `<esc>` - Simulates pressing the escape key. +- `<esc>` - Simulates pressing the escape key.
-- `<tab>` - Simulates pressing the tab key. +- `<tab>` - Simulates pressing the tab key. -- `<f1>` - `<f12>` - Simulates pressing a function key. +- `<f1>` - `<f12>` - Simulates pressing a function key. -- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. +- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. -- `<spacebar>` - Simulates pressing the spacebar. +- `<spacebar>` - Simulates pressing the spacebar. -- `<insert>` - Simulates pressing the insert key. +- `<insert>` - Simulates pressing the insert key. -- `<home>` `<end>` - Simulates pressing the home and end keys. +- `<home>` `<end>` - Simulates pressing the home and end keys. -- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. +- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. -- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. +- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. -- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. +- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. -- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. +- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. -- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. +- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. -- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. +- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. -- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. +- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. -- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. +- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. -- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. +- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. -- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key. +- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key. -- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before +- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This is useful if you have to generally wait for the UI to update before typing more. -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, otherwise they will be held down until the machine reboots. Use lowercase characters as well inside modifiers.
For example: to simulate ctrl+c use `<leftCtrlOn>c<leftCtrlOff>`. +When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, otherwise they will be held down until the machine reboots. Use lowercase characters as well inside modifiers. For example: to simulate ctrl+c use `<leftCtrlOn>c<leftCtrlOff>`. In addition to the special keys, each command to type is treated as a [template engine](/docs/templates/engine.html). The available variables are: -* `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will - be blank! +- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server + that is started serving the directory specified by the `http_directory` + configuration parameter. If `http_directory` isn't specified, these will + be blank! Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: -```json +``` json [ "", "/install/vmlinuz noapic ", @@ -305,7 +305,7 @@ There is a [windows version of mkisofs](http://opensourcepack.blogspot.co.uk/p/c Example powershell script. This is an actually working powershell script used to create a Windows answer iso: -```powershell +``` powershell $isoFolder = "answer-iso" if (test-path $isoFolder){ remove-item $isoFolder -Force -Recurse @@ -339,12 +339,11 @@ if (test-path $isoFolder){ } ``` - ## Example For Windows Server 2012 R2 Generation 2 Packer config: -```json +``` json { "builders": [ { @@ -402,7 +401,7 @@ Packer config: autounattend.xml: -```xml +``` xml @@ -799,12 +798,11 @@ Finish Setup cache proxy during installation --> - ``` sysprep-unattend.xml: -```text +``` text @@ -873,7 +871,7 @@ a virtual switch with an `External` connection type. ### Packer config: -```json +``` json { "variables": { "vm_name": "ubuntu-xenial", @@ -924,7 +922,7 @@
### preseed.cfg: -```text +``` text ## Options to set on the command line d-i debian-installer/locale string en_US.utf8 d-i console-setup/ask_detect boolean false diff --git a/website/source/docs/builders/hyperv.html.md b/website/source/docs/builders/hyperv.html.md index 0c21a42ee..d12e2a212 100644 --- a/website/source/docs/builders/hyperv.html.md +++ b/website/source/docs/builders/hyperv.html.md @@ -1,10 +1,10 @@ --- +description: | + The Hyper-V Packer builder is able to create Hyper-V virtual machines and + export them. layout: docs -sidebar_current: docs-builders-hyperv -page_title: Hyper-V - Builders -description: |- - The Hyper-V Packer builder is able to create Hyper-V virtual machines and - export them. +page_title: 'Hyper-V - Builders' +sidebar_current: 'docs-builders-hyperv' --- # HyperV Builder @@ -14,7 +14,7 @@ virtual machines and export them. Packer currently only supports building HyperV machines with an iso: -- [hyperv-iso](/docs/builders/hyperv-iso.html) - Starts from - an ISO file, creates a brand new Hyper-V VM, installs an OS, - provisions software within the OS, then exports that machine to create - an image. This is best for people who want to start from scratch. +- [hyperv-iso](/docs/builders/hyperv-iso.html) - Starts from + an ISO file, creates a brand new Hyper-V VM, installs an OS, + provisions software within the OS, then exports that machine to create + an image. This is best for people who want to start from scratch. diff --git a/website/source/docs/builders/index.html.md b/website/source/docs/builders/index.html.md index 5e7fa4f6d..6e18f7b70 100644 --- a/website/source/docs/builders/index.html.md +++ b/website/source/docs/builders/index.html.md @@ -1,10 +1,10 @@ --- +description: | + Builders are responsible for creating machines and generating images from them + for various platforms.
layout: docs page_title: Builders -sidebar_current: docs-builders -description: |- - Builders are responsible for creating machines and generating images from them - for various platforms. +sidebar_current: 'docs-builders' --- # Builders diff --git a/website/source/docs/builders/null.html.md b/website/source/docs/builders/null.html.md index b58112baf..eee958c6b 100644 --- a/website/source/docs/builders/null.html.md +++ b/website/source/docs/builders/null.html.md @@ -1,12 +1,12 @@ --- +description: | + The null Packer builder is not really a builder, it just sets up an SSH + connection and runs the provisioners. It can be used to debug provisioners + without incurring high wait times. It does not create any kind of image or + artifact. layout: docs -sidebar_current: docs-builders-null -page_title: Null - Builders -description: |- - The null Packer builder is not really a builder, it just sets up an SSH - connection and runs the provisioners. It can be used to debug provisioners - without incurring high wait times. It does not create any kind of image or - artifact. +page_title: 'Null - Builders' +sidebar_current: 'docs-builders-null' --- # Null Builder @@ -23,7 +23,7 @@ artifact. Below is a fully functioning example. It doesn't do anything useful, since no provisioners are defined, but it will connect to the specified host via ssh. -```json +``` json { "type": "null", "ssh_host": "127.0.0.1", diff --git a/website/source/docs/builders/oneandone.html.md b/website/source/docs/builders/oneandone.html.md index 3947ae412..16da7b6d3 100644 --- a/website/source/docs/builders/oneandone.html.md +++ b/website/source/docs/builders/oneandone.html.md @@ -1,9 +1,8 @@ --- +description: 'The 1&1 builder is able to create images for 1&1 cloud.' layout: docs -sidebar_current: docs-builders-oneandone -page_title: 1&1 - Builders -description: |- - The 1&1 builder is able to create images for 1&1 cloud. 
+page_title: '1&1 - Builders' +sidebar_current: 'docs-builders-oneandone' --- # 1&1 Builder @@ -24,28 +23,27 @@ builder. ### Required -- `source_image_name` (string) - 1&1 Server Appliance name of type `IMAGE`. +- `source_image_name` (string) - 1&1 Server Appliance name of type `IMAGE`. -- `token` (string) - 1&1 REST API Token. This can be specified via environment variable `ONEANDONE_TOKEN` +- `token` (string) - 1&1 REST API Token. This can be specified via environment variable `ONEANDONE_TOKEN` ### Optional -- `data_center_name` - Name of virtual data center. Possible values "ES", "US", "GB", "DE". Default value "US" +- `data_center_name` - Name of virtual data center. Possible values "ES", "US", "GB", "DE". Default value "US" -- `disk_size` (string) - Amount of disk space for this image in GB. Defaults to "50" +- `disk_size` (string) - Amount of disk space for this image in GB. Defaults to "50" -- `image_name` (string) - Resulting image. If "image_name" is not provided Packer will generate it +- `image_name` (string) - Resulting image. If "image\_name" is not provided Packer will generate it -- `retries` (int) - Number of retries Packer will make status requests while waiting for the build to complete. Default value "600". - -- `url` (string) - Endpoint for the 1&1 REST API. Default URL "https://cloudpanel-api.1and1.com/v1" +- `retries` (int) - The number of times Packer will retry status requests while waiting for the build to complete. Default value "600". +- `url` (string) - Endpoint for the 1&1 REST API. Default URL "<https://cloudpanel-api.1and1.com/v1>" ## Example Here is a basic example: -```json +``` json { "builders":[ { diff --git a/website/source/docs/builders/openstack.html.md b/website/source/docs/builders/openstack.html.md index f042921f5..5406f90ed 100644 --- a/website/source/docs/builders/openstack.html.md +++ b/website/source/docs/builders/openstack.html.md @@ -1,13 +1,13 @@ --- +description: | + The openstack Packer builder is able to create new images for use with + OpenStack.
The builder takes a source image, runs any provisioning necessary + on the image after launching it, then creates a new reusable image. This + reusable image can then be used as the foundation of new servers that are + launched within OpenStack. layout: docs -sidebar_current: docs-builders-openstack -page_title: OpenStack - Builders -description: |- - The openstack Packer builder is able to create new images for use with - OpenStack. The builder takes a source image, runs any provisioning necessary - on the image after launching it, then creates a new reusable image. This - reusable image can then be used as the foundation of new servers that are - launched within OpenStack. +page_title: 'OpenStack - Builders' +sidebar_current: 'docs-builders-openstack' --- # OpenStack Builder @@ -25,9 +25,9 @@ created. This simplifies configuration quite a bit. The builder does *not* manage images. Once it creates an image, it is up to you to use it or delete it. -~> **OpenStack Liberty or later requires OpenSSL!** To use the OpenStack +~> **OpenStack Liberty or later requires OpenSSL!** To use the OpenStack builder with OpenStack Liberty (Oct 2015) or later you need to have OpenSSL -installed _if you are using temporary key pairs_, i.e. don't use +installed *if you are using temporary key pairs*, i.e. don't use [`ssh_keypair_name`](openstack.html#ssh_keypair_name) nor [`ssh_password`](/docs/templates/communicator.html#ssh_password). All major OS'es have OpenSSL installed by default except Windows. @@ -44,119 +44,119 @@ builder. ### Required: -- `flavor` (string) - The ID, name, or full URL for the desired flavor for the +- `flavor` (string) - The ID, name, or full URL for the desired flavor for the server to be created. -- `image_name` (string) - The name of the resulting image. +- `image_name` (string) - The name of the resulting image. -- `identity_endpoint` (string) - The URL to the OpenStack Identity service. 
+- `identity_endpoint` (string) - The URL to the OpenStack Identity service. If not specified, Packer will use the environment variable `OS_AUTH_URL`, if set. -- `source_image` (string) - The ID or full URL to the base image to use. This +- `source_image` (string) - The ID or full URL to the base image to use. This is the image that will be used to launch a new server and provision it. Unless you specify completely custom SSH settings, the source image must have `cloud-init` installed so that the keypair gets assigned properly. -- `source_image_name` (string) - The name of the base image to use. This +- `source_image_name` (string) - The name of the base image to use. This is an alternative way of providing `source_image` and only either of them can be specified. -- `username` or `user_id` (string) - The username or id used to connect to +- `username` or `user_id` (string) - The username or id used to connect to the OpenStack service. If not specified, Packer will use the environment variable `OS_USERNAME` or `OS_USERID`, if set. -- `password` (string) - The password used to connect to the OpenStack service. +- `password` (string) - The password used to connect to the OpenStack service. If not specified, Packer will use the environment variable `OS_PASSWORD`, if set. ### Optional: -- `availability_zone` (string) - The availability zone to launch the +- `availability_zone` (string) - The availability zone to launch the server in. If this isn't specified, the default enforced by your OpenStack cluster will be used. This may be required for some OpenStack clusters. -- `cacert` (string) - Custom CA certificate file path. - If ommited the OS_CACERT environment variable can be used. +- `cacert` (string) - Custom CA certificate file path. + If omitted the OS\_CACERT environment variable can be used. -- `config_drive` (boolean) - Whether or not nova should use ConfigDrive for - cloud-init metadata.
+- `config_drive` (boolean) - Whether or not nova should use ConfigDrive for + cloud-init metadata. -- `cert` (string) - Client certificate file path for SSL client authentication. - If omitted the OS_CERT environment variable can be used. +- `cert` (string) - Client certificate file path for SSL client authentication. + If omitted the OS\_CERT environment variable can be used. -- `domain_name` or `domain_id` (string) - The Domain name or ID you are +- `domain_name` or `domain_id` (string) - The Domain name or ID you are authenticating with. OpenStack installations require this if identity v3 is used. Packer will use the environment variable `OS_DOMAIN_NAME` or `OS_DOMAIN_ID`, if set. -- `endpoint_type` (string) - The endpoint type to use. Can be any of "internal", +- `endpoint_type` (string) - The endpoint type to use. Can be any of "internal", "internalURL", "admin", "adminURL", "public", and "publicURL". By default this is "public". -- `floating_ip` (string) - A specific floating IP to assign to this instance. +- `floating_ip` (string) - A specific floating IP to assign to this instance. -- `floating_ip_pool` (string) - The name of the floating IP pool to use to +- `floating_ip_pool` (string) - The name of the floating IP pool to use to allocate a floating IP. -- `image_members` (array of strings) - List of members to add to the image +- `image_members` (array of strings) - List of members to add to the image after creation. An image member is usually a project (also called the “tenant”) with whom the image is shared. -- `image_visibility` (string) - One of "public", "private", "shared", or +- `image_visibility` (string) - One of "public", "private", "shared", or "community". -- `insecure` (boolean) - Whether or not the connection to OpenStack can be +- `insecure` (boolean) - Whether or not the connection to OpenStack can be done over an insecure connection. By default this is false. -- `key` (string) - Client private key file path for SSL client authentication. 
- If ommited the OS_KEY environment variable can be used. +- `key` (string) - Client private key file path for SSL client authentication. + If omitted the OS\_KEY environment variable can be used. -- `metadata` (object of key/value strings) - Glance metadata that will be +- `metadata` (object of key/value strings) - Glance metadata that will be applied to the image. -- `instance_metadata` (object of key/value strings) - Metadata that is +- `instance_metadata` (object of key/value strings) - Metadata that is applied to the server instance created by Packer. Also called server properties in some documentation. The strings have a max size of 255 bytes each. -- `networks` (array of strings) - A list of networks by UUID to attach to +- `networks` (array of strings) - A list of networks by UUID to attach to this instance. -- `rackconnect_wait` (boolean) - For rackspace, whether or not to wait for +- `rackconnect_wait` (boolean) - For rackspace, whether or not to wait for Rackconnect to assign the machine an IP address before connecting via SSH. Defaults to false. -- `region` (string) - The name of the region, such as "DFW", in which to +- `region` (string) - The name of the region, such as "DFW", in which to launch the server to create the image. If not specified, Packer will use the environment variable `OS_REGION_NAME`, if set. -- `reuse_ips` (boolean) - Whether or not to attempt to reuse existing +- `reuse_ips` (boolean) - Whether or not to attempt to reuse existing unassigned floating ips in the project before allocating a new one. Note that it is not possible to safely do this concurrently, so if you are running multiple openstack builds concurrently, or if other processes are assigning and using floating IPs in the same openstack project while packer is running, you should not set this to true. Defaults to false.
-- `security_groups` (array of strings) - A list of security groups by name to +- `security_groups` (array of strings) - A list of security groups by name to add to this instance. -- `ssh_interface` (string) - The type of interface to connect via SSH. Values +- `ssh_interface` (string) - The type of interface to connect via SSH. Values useful for Rackspace are "public" or "private", and the default behavior is to connect via whichever is returned first from the OpenStack API. -- `ssh_ip_version` (string) - The IP version to use for SSH connections, valid +- `ssh_ip_version` (string) - The IP version to use for SSH connections, valid values are `4` and `6`. Useful on dual stacked instances where the default behavior is to connect via whichever IP address is returned first from the OpenStack API. -- `ssh_keypair_name` (string) - If specified, this is the key that will be +- `ssh_keypair_name` (string) - If specified, this is the key that will be used for SSH with the machine. By default, this is blank, and Packer will generate a temporary keypair. [`ssh_password`](/docs/templates/communicator.html#ssh_password) is used. [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file) or `ssh_agent_auth` must be specified when `ssh_keypair_name` is utilized. -- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to +- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to authenticate connections to the source instance. No temporary keypair will be created, and the values of `ssh_password` and `ssh_private_key_file` will be ignored. To use this option with a key pair already configured in the source @@ -164,30 +164,30 @@ builder. with the source instance, set the `ssh_keypair_name` field to the name of the key pair. -- `temporary_key_pair_name` (string) - The name of the temporary key pair +- `temporary_key_pair_name` (string) - The name of the temporary key pair to generate. 
By default, Packer generates a name that looks like - `packer_`, where \ is a 36 character unique identifier. + `packer_`, where <UUID> is a 36 character unique identifier. -- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the +- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the instance into. Some OpenStack installations require this. If not specified, Packer will use the environment variable `OS_TENANT_NAME`, if set. Tenant is also called Project in later versions of OpenStack. -- `use_floating_ip` (boolean) - _Deprecated_ use `floating_ip` or `floating_ip_pool` +- `use_floating_ip` (boolean) - *Deprecated* use `floating_ip` or `floating_ip_pool` instead. -- `user_data` (string) - User data to apply when launching the instance. Note +- `user_data` (string) - User data to apply when launching the instance. Note that you need to be careful about escaping characters due to the templates being JSON. It is often more convenient to use `user_data_file`, instead. -- `user_data_file` (string) - Path to a file that will be used for the user +- `user_data_file` (string) - Path to a file that will be used for the user data when launching the instance. ## Basic Example: DevStack Here is a basic example. This is a example to build on DevStack running in a VM. -```json +``` json { "type": "openstack", "identity_endpoint": "http://:5000/v3", @@ -202,7 +202,6 @@ Here is a basic example. This is a example to build on DevStack running in a VM. "flavor": "m1.tiny", "insecure": "true" } - ``` ## Basic Example: Rackspace public cloud @@ -210,7 +209,7 @@ Here is a basic example. This is a example to build on DevStack running in a VM. Here is a basic example. This is a working example to build a Ubuntu 12.04 LTS (Precise Pangolin) on Rackspace OpenStack cloud offering. -```json +``` json { "type": "openstack", "username": "foo", @@ -228,7 +227,7 @@ Here is a basic example. 
This is a working example to build an Ubuntu 12.04 LTS This example builds an Ubuntu 14.04 image on a private OpenStack cloud, powered by Metacloud. -```json +``` json { "type": "openstack", "ssh_username": "root", @@ -243,27 +242,27 @@ appear in the template. That is because I source a standard OpenStack script with environment variables set before I run this. This script is setting environment variables like: -- `OS_AUTH_URL` -- `OS_TENANT_ID` -- `OS_USERNAME` -- `OS_PASSWORD` +- `OS_AUTH_URL` +- `OS_TENANT_ID` +- `OS_USERNAME` +- `OS_PASSWORD` This is slightly different when identity v3 is used: -- `OS_AUTH_URL` -- `OS_USERNAME` -- `OS_PASSWORD` -- `OS_DOMAIN_NAME` -- `OS_TENANT_NAME` +- `OS_AUTH_URL` +- `OS_USERNAME` +- `OS_PASSWORD` +- `OS_DOMAIN_NAME` +- `OS_TENANT_NAME` This will authenticate the user on the domain and scope you to the project. A tenant is the same as a project. It's optional to use names or IDs in v3. -This means you can use `OS_USERNAME` or `OS_USERID`, `OS_TENANT_ID` or +This means you can use `OS_USERNAME` or `OS_USERID`, `OS_TENANT_ID` or `OS_TENANT_NAME` and `OS_DOMAIN_ID` or `OS_DOMAIN_NAME`. The above example would be equivalent to an RC file looking like this: -```shell +``` shell export OS_AUTH_URL="https://identity.myprovider/v3" export OS_USERNAME="myuser" export OS_PASSWORD="password" @@ -274,18 +273,15 @@ export OS_PROJECT_DOMAIN_NAME="mydomain" ## Notes on OpenStack Authorization The simplest way to get all settings for authorization against OpenStack is to -go into the OpenStack Dashboard (Horizon) select your _Project_ and navigate -_Project, Access & Security_, select _API Access_ and _Download OpenStack RC -File v3_. Source the file, and select your wanted region by setting -environment variable `OS_REGION_NAME` or `OS_REGION_ID` and `export -OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`.
+go into the OpenStack Dashboard (Horizon), select your *Project* and navigate to +*Project, Access & Security*, select *API Access* and *Download OpenStack RC +File v3*. Source the file, and select your wanted region by setting +environment variable `OS_REGION_NAME` or `OS_REGION_ID` and `export OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`. -~> `OS_TENANT_NAME` or `OS_TENANT_ID` must be used even with Identity v3, +~> `OS_TENANT_NAME` or `OS_TENANT_ID` must be used even with Identity v3, `OS_PROJECT_NAME` and `OS_PROJECT_ID` have no effect in Packer. To troubleshoot authorization issues test your environment variables with the OpenStack CLI. It can be installed with -``` -$ pip install --user python-openstackclient -``` + $ pip install --user python-openstackclient diff --git a/website/source/docs/builders/parallels-iso.html.md b/website/source/docs/builders/parallels-iso.html.md index 33d676084..14a484c97 100644 --- a/website/source/docs/builders/parallels-iso.html.md +++ b/website/source/docs/builders/parallels-iso.html.md @@ -1,11 +1,11 @@ --- +description: | + The Parallels Packer builder is able to create Parallels Desktop for Mac + virtual machines and export them in the PVM format, starting from an ISO + image. layout: docs -sidebar_current: docs-builders-parallels-iso -page_title: Parallels ISO - Builders -description: |- - The Parallels Packer builder is able to create Parallels Desktop for Mac - virtual machines and export them in the PVM format, starting from an ISO - image. +page_title: 'Parallels ISO - Builders' +sidebar_current: 'docs-builders-parallels-iso' --- # Parallels Builder (from an ISO) @@ -27,7 +27,7 @@ Here is a basic example. This example is not functional. It will start the OS installer but then fail because we don't provide the preseed file for Ubuntu to self-install.
Still, the example serves to show the basic configuration: -```json +``` json { "type": "parallels-iso", "guest_os_type": "ubuntu", @@ -58,56 +58,55 @@ builder. ### Required: -- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO +- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO files are so large, this is required and Packer will verify it prior to booting a virtual machine with the ISO attached. The type of the checksum is specified with `iso_checksum_type`, documented below. At least one of `iso_checksum` and `iso_checksum_url` must be defined. This has precedence over `iso_checksum_url` type. -- `iso_checksum_type` (string) - The type of the checksum specified in +- `iso_checksum_type` (string) - The type of the checksum specified in `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or "sha512" currently. While "none" will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. -- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file +- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file containing a checksum for the OS ISO file. At least one of `iso_checksum` and `iso_checksum_url` must be defined. This will be ignored if `iso_checksum` is non empty. -- `iso_url` (string) - A URL to the ISO containing the installation image. +- `iso_url` (string) - A URL to the ISO containing the installation image. This URL can be either an HTTP URL or a file URL (or path to a file). If this is an HTTP URL, Packer will download it and cache it between runs. -- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to +- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to install into the VM. Valid values are "win", "lin", "mac", "os2" and "other". This can be omitted only if `parallels_tools_mode` is "disable". 
-- `ssh_username` (string) - The username to use to SSH into the machine once +- `ssh_username` (string) - The username to use to SSH into the machine once the OS is installed. - ### Optional: -- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself. -- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `disk_size` (integer) - The size, in megabytes, of the hard disk to create +- `disk_size` (integer) - The size, in megabytes, of the hard disk to create for the VM. By default, this is 40000 (about 40 GB). -- `disk_type` (string) - The type for image file based virtual disk drives, +- `disk_type` (string) - The type for image file based virtual disk drives, defaults to `expand`. Valid options are `expand` (expanding disk) that the image file is small initially and grows in size as you add data to it, and `plain` (plain disk) that the image file has a fixed size from the moment it @@ -115,7 +114,7 @@ builder. perform faster than expanding disks. `skip_compaction` will be set to true automatically for plain disks. -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. 
This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files listed in @@ -125,64 +124,64 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto +- `floppy_dirs` (array of strings) - A list of directories to place onto the floppy disk recursively. This is similar to the `floppy_files` option except that the directory structure is preserved. This is useful when your floppy disk includes drivers or if you just want to organize its contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. -- `guest_os_type` (string) - The guest OS type being installed. By default +- `guest_os_type` (string) - The guest OS type being installed. By default this is "other", but you can get *dramatic* performance improvements by setting this to the proper value. To view all available values for this run `prlctl create x --distribution list`. Setting the correct value hints to Parallels Desktop how to optimize the virtual hardware to work best with that operating system. -- `hard_drive_interface` (string) - The type of controller that the hard +- `hard_drive_interface` (string) - The type of controller that the hard drives are attached to, defaults to "sata". Valid options are "sata", "ide", and "scsi". -- `host_interfaces` (array of strings) - A list of which interfaces on the +- `host_interfaces` (array of strings) - A list of which interfaces on the host should be searched for an IP address. The first IP address found on one of these will be used as `{{ .HTTPIP }}` in the `boot_command`. Defaults to \["en0", "en1", "en2", "en3", "en4", "en5", "en6", "en7", "en8", "en9", "ppp0", "ppp1", "ppp2"\].
-- `http_directory` (string) - Path to a directory to serve using an +- `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP server will be started. The address and port of the HTTP server will be available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum port the same. By default the values are 8000 and 9000, respectively. -- `iso_target_extension` (string) - The extension of the iso file after +- `iso_target_extension` (string) - The extension of the iso file after download. This defaults to "iso". -- `iso_target_path` (string) - The path where the iso should be saved after +- `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the original filename as its name. -- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. +- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. Packer will try these in order. If anything goes wrong attempting to download or while downloading a single URL, it will move on to the next. All URLs must point to the same file (same checksum). By default this is empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. 
-- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `parallels_tools_guest_path` (string) - The path in the virtual machine to +- `parallels_tools_guest_path` (string) - The path in the virtual machine to upload Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload". This is a [configuration template](/docs/templates/engine.html) that has a single @@ -190,14 +189,14 @@ builder. `parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso" which should upload into the login directory of the user. -- `parallels_tools_mode` (string) - The method by which Parallels Tools are +- `parallels_tools_mode` (string) - The method by which Parallels Tools are made available to the guest for installation. Valid options are "upload", "attach", or "disable". If the mode is "attach" the Parallels Tools ISO will be attached as a CD device to the virtual machine. If the mode is "upload" the Parallels Tools ISO will be uploaded to the path specified by `parallels_tools_guest_path`. The default value is "upload". -- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute +- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in order to further customize the virtual machine being created. The value of this is an array of commands to execute. The commands are executed in the order defined in the template. For each command, the command is defined @@ -208,32 +207,32 @@ builder. variable is replaced with the VM name. More details on how to use `prlctl` are below. 
-- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except +- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except that it is run after the virtual machine is shut down, and before the virtual machine is exported. -- `prlctl_version_file` (string) - The path within the virtual machine to +- `prlctl_version_file` (string) - The path within the virtual machine to upload a file that contains the `prlctl` version that was used to create the machine. This information can be useful for provisioning. By default this is ".prlctl\_version", which will generally upload it into the home directory. -- `shutdown_command` (string) - The command to use to gracefully shut down the +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine. -- `shutdown_timeout` (string) - The amount of time to wait after executing the +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is "5m", or five minutes. -- `skip_compaction` (boolean) - Virtual disk image is compacted at the end of +- `skip_compaction` (boolean) - Virtual disk image is compacted at the end of the build process using the `prl_disk_tool` utility (except when `disk_type` is set to `plain`). In certain rare cases, this might corrupt the resulting disk image. If you find this to be the case, you can disable compaction using this configuration value. -- `vm_name` (string) - This is the name of the PVM directory for the new +- `vm_name` (string) - This is the name of the PVM directory for the new virtual machine, without the file extension. By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build.
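Several of the optional settings documented above can be combined in one builder block. The fragment below is a hypothetical sketch, not a tested configuration: the shutdown command, timeout, and VM name are illustrative values, and the `prlctl_post` entry reuses the `--memsize` example shown elsewhere in this document.

``` json
{
  "type": "parallels-iso",
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
  "shutdown_timeout": "10m",
  "skip_compaction": true,
  "vm_name": "packer-ubuntu-1204",
  "prlctl_post": [
    ["set", "{{.Name}}", "--memsize", "1024"]
  ]
}
```

Because `prlctl_post` runs after shutdown but before export, commands placed there affect the exported machine rather than the build-time VM.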
@@ -254,47 +253,47 @@ simulating a human actually typing the keyboard. There are a set of special keys available. If these are in your boot command, they will be replaced by the proper key: -- `<bs>` - Backspace +- `<bs>` - Backspace -- `<del>` - Delete +- `<del>` - Delete -- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. +- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. -- `<esc>` - Simulates pressing the escape key. +- `<esc>` - Simulates pressing the escape key. -- `<tab>` - Simulates pressing the tab key. +- `<tab>` - Simulates pressing the tab key. -- `<f1>` - `<f12>` - Simulates pressing a function key. +- `<f1>` - `<f12>` - Simulates pressing a function key. -- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. +- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. -- `<spacebar>` - Simulates pressing the spacebar. +- `<spacebar>` - Simulates pressing the spacebar. -- `<insert>` - Simulates pressing the insert key. +- `<insert>` - Simulates pressing the insert key. -- `<home>` `<end>` - Simulates pressing the home and end keys. +- `<home>` `<end>` - Simulates pressing the home and end keys. -- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. +- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. -- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. +- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. -- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. +- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. -- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. +- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. -- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. +- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. -- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. +- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. -- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. +- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. -- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. +- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. -- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. +- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. -- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
+- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key. -- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before +- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This is useful if you have to generally wait for the UI to update before typing more. @@ -308,7 +307,7 @@ In addition to the special keys, each command to type is treated as a [template engine](/docs/templates/engine.html). The available variables are: -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server +- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server that is started serving the directory specified by the `http_directory` configuration parameter. If `http_directory` isn't specified, these will be blank! @@ -316,7 +315,7 @@ available variables are: Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: -```text +``` text [ "<esc><wait>", "/install/vmlinuz noapic ", @@ -342,7 +341,7 @@ Extra `prlctl` commands are defined in the template in the `prlctl` section. An example is shown below that sets the memory and number of CPUs within the virtual machine: -```json +``` json { "prlctl": [ ["set", "{{.Name}}", "--memsize", "1024"], diff --git a/website/source/docs/builders/parallels-pvm.html.md b/website/source/docs/builders/parallels-pvm.html.md index 2ee0aaf51..6a540b5b3 100644 --- a/website/source/docs/builders/parallels-pvm.html.md +++ b/website/source/docs/builders/parallels-pvm.html.md @@ -1,11 +1,11 @@ --- +description: | + This Parallels builder is able to create Parallels Desktop for Mac virtual + machines and export them in the PVM format, starting from an existing PVM + (exported virtual machine image).
layout: docs -sidebar_current: docs-builders-parallels-pvm -page_title: Parallels PVM - Builders -description: |- - This Parallels builder is able to create Parallels Desktop for Mac virtual - machines and export them in the PVM format, starting from an existing PVM - (exported virtual machine image). +page_title: 'Parallels PVM - Builders' +sidebar_current: 'docs-builders-parallels-pvm' --- # Parallels Builder (from a PVM) @@ -26,7 +26,7 @@ create the image. The imported machine is deleted prior to finishing the build. Here is a basic example. This example is functional if you have a PVM matching the settings here. -```json +``` json { "type": "parallels-pvm", "parallels_tools_flavor": "lin", @@ -54,33 +54,33 @@ builder. ### Required: -- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to +- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to install into the VM. Valid values are "win", "lin", "mac", "os2" and "other". This can be omitted only if `parallels_tools_mode` is "disable". -- `source_path` (string) - The path to a PVM directory that acts as the source +- `source_path` (string) - The path to a PVM directory that acts as the source of this build. -- `ssh_username` (string) - The username to use to SSH into the machine once +- `ssh_username` (string) - The username to use to SSH into the machine once the OS is installed. ### Optional: -- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself.
-- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files listed in @@ -90,20 +90,20 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto +- `floppy_dirs` (array of strings) - A list of directories to place onto the floppy disk recursively. This is similar to the `floppy_files` option except that the directory structure is preserved. This is useful when your floppy disk includes drivers or if you just want to organize its contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. -- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build.
-- `parallels_tools_guest_path` (string) - The path in the VM to upload +- `parallels_tools_guest_path` (string) - The path in the VM to upload Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload". This is a [configuration template](/docs/templates/engine.html) that has a single @@ -111,14 +111,14 @@ builder. `parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso" which should upload into the login directory of the user. -- `parallels_tools_mode` (string) - The method by which Parallels Tools are +- `parallels_tools_mode` (string) - The method by which Parallels Tools are made available to the guest for installation. Valid options are "upload", "attach", or "disable". If the mode is "attach" the Parallels Tools ISO will be attached as a CD device to the virtual machine. If the mode is "upload" the Parallels Tools ISO will be uploaded to the path specified by `parallels_tools_guest_path`. The default value is "upload". -- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute +- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in order to further customize the virtual machine being created. The value of this is an array of commands to execute. The commands are executed in the order defined in the template. For each command, the command is defined @@ -129,35 +129,35 @@ builder. variable is replaced with the VM name. More details on how to use `prlctl` are below. -- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except +- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except that it is run after the virtual machine is shut down, and before the virtual machine is exported. -- `prlctl_version_file` (string) - The path within the virtual machine to +- `prlctl_version_file` (string) - The path within the virtual machine to upload a file that contains the `prlctl` version that was used to create the machine.
This information can be useful for provisioning. By default this is ".prlctl\_version", which will generally upload it into the home directory. -- `reassign_mac` (boolean) - If this is "false" the MAC address of the first +- `reassign_mac` (boolean) - If this is "false" the MAC address of the first NIC will be reused when imported; otherwise a new MAC address will be generated by Parallels. Defaults to "false". -- `shutdown_command` (string) - The command to use to gracefully shut down the +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine. -- `shutdown_timeout` (string) - The amount of time to wait after executing the +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is "5m", or five minutes. -- `skip_compaction` (boolean) - Virtual disk image is compacted at the end of +- `skip_compaction` (boolean) - Virtual disk image is compacted at the end of the build process using the `prl_disk_tool` utility. In certain rare cases, this might corrupt the resulting disk image. If you find this to be the case, you can disable compaction using this configuration value. -- `vm_name` (string) - This is the name of the virtual machine when it is +- `vm_name` (string) - This is the name of the virtual machine when it is imported as well as the name of the PVM directory when the virtual machine is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build. @@ -186,47 +186,47 @@ simulating a human actually typing the keyboard. There are a set of special keys available.
If these are in your boot command, they will be replaced by the proper key: -- `<bs>` - Backspace +- `<bs>` - Backspace -- `<del>` - Delete +- `<del>` - Delete -- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. +- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. -- `<esc>` - Simulates pressing the escape key. +- `<esc>` - Simulates pressing the escape key. -- `<tab>` - Simulates pressing the tab key. +- `<tab>` - Simulates pressing the tab key. -- `<f1>` - `<f12>` - Simulates pressing a function key. +- `<f1>` - `<f12>` - Simulates pressing a function key. -- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. +- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. -- `<spacebar>` - Simulates pressing the spacebar. +- `<spacebar>` - Simulates pressing the spacebar. -- `<insert>` - Simulates pressing the insert key. +- `<insert>` - Simulates pressing the insert key. -- `<home>` `<end>` - Simulates pressing the home and end keys. +- `<home>` `<end>` - Simulates pressing the home and end keys. -- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. +- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. -- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. +- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. -- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. +- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. -- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. +- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. -- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. +- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. -- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. +- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. -- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. +- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. -- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. +- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. -- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. +- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. -- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key. +- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
-- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before +- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This is useful if you have to generally wait for the UI to update before typing more. @@ -246,7 +246,7 @@ Extra `prlctl` commands are defined in the template in the `prlctl` section. An example is shown below that sets the memory and number of CPUs within the virtual machine: -```json +``` json { "prlctl": [ ["set", "{{.Name}}", "--memsize", "1024"], diff --git a/website/source/docs/builders/parallels.html.md b/website/source/docs/builders/parallels.html.md index 6f4bfa3aa..d88cfc1ce 100644 --- a/website/source/docs/builders/parallels.html.md +++ b/website/source/docs/builders/parallels.html.md @@ -1,10 +1,10 @@ --- +description: | + The Parallels Packer builder is able to create Parallels Desktop for Mac + virtual machines and export them in the PVM format. layout: docs -sidebar_current: docs-builders-parallels -page_title: Parallels - Builders -description: |- - The Parallels Packer builder is able to create Parallels Desktop for Mac - virtual machines and export them in the PVM format. +page_title: 'Parallels - Builders' +sidebar_current: 'docs-builders-parallels' --- # Parallels Builder @@ -17,16 +17,16 @@ Packer actually comes with multiple builders able to create Parallels machines, depending on the strategy you want to use to build the image. Packer supports the following Parallels builders: -- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO file, - creates a brand new Parallels VM, installs an OS, provisions software within - the OS, then exports that machine to create an image. This is best for people - who want to start from scratch. +- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO file, + creates a brand new Parallels VM, installs an OS, provisions software within + the OS, then exports that machine to create an image.
This is best for people + who want to start from scratch. -- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an - existing PVM file, runs provisioners on top of that VM, and exports that - machine to create an image. This is best if you have an existing Parallels VM - export you want to use as the source. As an additional benefit, you can feed - the artifact of this builder back into itself to iterate on a machine. +- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an + existing PVM file, runs provisioners on top of that VM, and exports that + machine to create an image. This is best if you have an existing Parallels VM + export you want to use as the source. As an additional benefit, you can feed + the artifact of this builder back into itself to iterate on a machine. ## Requirements diff --git a/website/source/docs/builders/profitbricks.html.md b/website/source/docs/builders/profitbricks.html.md index e7d7ead24..e317d5a0c 100644 --- a/website/source/docs/builders/profitbricks.html.md +++ b/website/source/docs/builders/profitbricks.html.md @@ -1,9 +1,8 @@ --- +description: 'The ProfitBricks builder is able to create images for ProfitBricks cloud.' layout: docs -sidebar_current: docs-builders-profitbricks -page_title: ProfitBricks - Builders -description: |- - The ProfitBricks builder is able to create images for ProfitBricks cloud. +page_title: 'ProfitBricks - Builders' +sidebar_current: 'docs-builders-profitbricks' --- # ProfitBricks Builder @@ -24,39 +23,37 @@ builder. ### Required -- `image` (string) - ProfitBricks volume image. Only Linux public images are supported. To obtain full list of available images you can use [ProfitBricks CLI](https://github.com/profitbricks/profitbricks-cli#image). +- `image` (string) - ProfitBricks volume image. Only Linux public images are supported. To obtain a full list of available images you can use the [ProfitBricks CLI](https://github.com/profitbricks/profitbricks-cli#image).
-- `password` (string) - ProfitBricks password. This can be specified via environment variable `PROFITBRICKS_PASSWORD', if provided. The value definded in the config has precedence over environemnt variable. - -- `username` (string) - ProfitBricks username. This can be specified via environment variable `PROFITBRICKS_USERNAME', if provided. The value definded in the config has precedence over environemnt variable. +- `password` (string) - ProfitBricks password. This can also be specified via the environment variable `PROFITBRICKS_PASSWORD`. The value defined in the config has precedence over the environment variable. +- `username` (string) - ProfitBricks username. This can also be specified via the environment variable `PROFITBRICKS_USERNAME`. The value defined in the config has precedence over the environment variable. ### Optional -- `cores` (integer) - Amount of CPU cores to use for this build. Defaults to "4". +- `cores` (integer) - Number of CPU cores to use for this build. Defaults to "4". -- `disk_size` (string) - Amount of disk space for this image in GB. Defaults to "50" +- `disk_size` (string) - Amount of disk space for this image in GB. Defaults to "50". -- `disk_type` (string) - Type of disk to use for this image. Defaults to "HDD". +- `disk_type` (string) - Type of disk to use for this image. Defaults to "HDD". -- `location` (string) - Defaults to "us/las". +- `location` (string) - Defaults to "us/las". -- `ram` (integer) - Amount of RAM to use for this image. Defalts to "2048". +- `ram` (integer) - Amount of RAM to use for this image. Defaults to "2048". -- `retries` (string) - Number of retries Packer will make status requests while waiting for the build to complete. Default value 120 seconds. +- `retries` (string) - Number of times Packer will retry status requests while waiting for the build to complete. The default value is 120 seconds.
-- `snapshot_name` (string) - If snapshot name is not provided Packer will generate it +- `snapshot_name` (string) - If a snapshot name is not provided, Packer will generate one. -- `snapshot_password` (string) - Password for the snapshot. - -- `url` (string) - Endpoint for the ProfitBricks REST API. Default URL "https://api.profitbricks.com/rest/v2" +- `snapshot_password` (string) - Password for the snapshot. +- `url` (string) - Endpoint for the ProfitBricks REST API. Default URL is "https://api.profitbricks.com/rest/v2". ## Example Here is a basic example: -```json +``` json { "builders": [ {
At least one of `iso_checksum` and `iso_checksum_url` must be defined. This has precedence over `iso_checksum_url`. -- `iso_checksum_type` (string) - The type of the checksum specified in +- `iso_checksum_type` (string) - The type of the checksum specified in `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or "sha512" currently. While "none" will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. -- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file +- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file containing a checksum for the OS ISO file. At least one of `iso_checksum` and `iso_checksum_url` must be defined. This will be ignored if `iso_checksum` is non-empty. -- `iso_url` (string) - A URL to the ISO containing the installation image. +- `iso_url` (string) - A URL to the ISO containing the installation image. This URL can be either an HTTP URL or a file URL (or path to a file). If this is an HTTP URL, Packer will download it and cache it between runs. This can also be a URL to an IMG or QCOW2 file, in which case QEMU will boot directly from it. When passing a path to an IMG or QCOW2 file, you should set `disk_image` to "true". -- `ssh_username` (string) - The username to use to SSH into the machine once +- `ssh_username` (string) - The username to use to SSH into the machine once the OS is installed. ### Optional: -- `accelerator` (string) - The accelerator type to use when running the VM. +- `accelerator` (string) - The accelerator type to use when running the VM. This may be `none`, `kvm`, `tcg`, or `xen`. The appropriate software must already be installed on your build machine to use the accelerator you specified. When no accelerator is specified, Packer will try to use `kvm` if it is available but will default to `tcg` otherwise.
-- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself. -- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `disk_cache` (string) - The cache mode to use for disk. Allowed values +- `disk_cache` (string) - The cache mode to use for disk. Allowed values include any of "writethrough", "writeback", "none", "unsafe" or "directsync". By default, this is set to "writeback". -- `disk_compression` (boolean) - Apply compression to the QCOW2 disk file +- `disk_compression` (boolean) - Apply compression to the QCOW2 disk file using `qemu-img convert`. Defaults to `false`. -- `disk_discard` (string) - The discard mode to use for disk. Allowed values +- `disk_discard` (string) - The discard mode to use for disk. Allowed values include any of "unmap" or "ignore". By default, this is set to "ignore". -- `disk_image` (boolean) - Packer defaults to building from an ISO file, this +- `disk_image` (boolean) - Packer defaults to building from an ISO file, this parameter controls whether the ISO URL supplied is actually a bootable QEMU image. When this value is set to true, the machine will clone the source, resize it according to `disk_size` and boot the image. -- `disk_interface` (string) - The interface to use for the disk. 
Allowed - values include any of "ide", "scsi", "virtio" or "virtio-scsi"^* . Note also +- `disk_interface` (string) - The interface to use for the disk. Allowed + values include any of "ide", "scsi", "virtio" or "virtio-scsi"^\* . Note also that any boot commands or kickstart type scripts must have proper adjustments for resulting device names. The Qemu builder uses "virtio" by default. - ^* Please be aware that use of the "scsi" disk interface has been disabled + ^\* Please be aware that use of the "scsi" disk interface has been disabled by Red Hat due to a bug described [here](https://bugzilla.redhat.com/show_bug.cgi?id=1019220). If you are running Qemu on RHEL or a RHEL variant such as CentOS, you *must* choose one of the other listed interfaces. Using the "scsi" interface under these circumstances will cause the build to fail. -- `disk_size` (integer) - The size, in megabytes, of the hard disk to create +- `disk_size` (integer) - The size, in megabytes, of the hard disk to create for the VM. By default, this is 40000 (about 40 GB). -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files listed in @@ -174,10 +174,9 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. The summary size of the listed files must not exceed 1.44 MB. The supported ways to move large - files into the OS are using `http_directory` or [the file provisioner]( - https://www.packer.io/docs/provisioners/file.html). + files into the OS are using `http_directory` or [the file provisioner](https://www.packer.io/docs/provisioners/file.html). 
-- `floppy_dirs` (array of strings) - A list of directories to place onto +- `floppy_dirs` (array of strings) - A list of directories to place onto the floppy disk recursively. This is similar to the `floppy_files` option except that the directory structure is preserved. This is useful when your floppy disk includes drivers or if you just want to organize its @@ -185,76 +184,76 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). The maximum summary size of all files in the listed directories are the same as in `floppy_files`. -- `format` (string) - Either "qcow2" or "raw", this specifies the output +- `format` (string) - Either "qcow2" or "raw", this specifies the output format of the virtual machine image. This defaults to `qcow2`. -- `headless` (boolean) - Packer defaults to building QEMU virtual machines by +- `headless` (boolean) - Packer defaults to building QEMU virtual machines by launching a GUI that shows the console of the machine being built. When this value is set to true, the machine will start without a console. You can still see the console if you make a note of the VNC display number chosen, and then connect using `vncviewer -Shared <host>:<display>` -- `http_directory` (string) - Path to a directory to serve using an +- `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP and will be requestable from the virtual machine. This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP server will be started. The address and port of the HTTP server will be available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`.
Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum port the same. By default the values are 8000 and 9000, respectively. -- `iso_skip_cache` (boolean) - Use iso from provided url. Qemu must support +- `iso_skip_cache` (boolean) - Use iso from provided url. Qemu must support curl block device. This defaults to `false`. -- `iso_target_extension` (string) - The extension of the iso file after +- `iso_target_extension` (string) - The extension of the iso file after download. This defaults to "iso". -- `iso_target_path` (string) - The path where the iso should be saved after +- `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the original filename as its name. -- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. +- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. Packer will try these in order. If anything goes wrong attempting to download or while downloading a single URL, it will move on to the next. All URLs must point to the same file (same checksum). By default this is empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. -- `machine_type` (string) - The type of machine emulation to use. Run your +- `machine_type` (string) - The type of machine emulation to use. Run your qemu binary with the flags `-machine help` to list available types for your system. This defaults to "pc". -- `net_device` (string) - The driver to use for the network interface. Allowed +- `net_device` (string) - The driver to use for the network interface. 
Allowed values are "ne2k\_pci", "i82551", "i82557b", "i82559er", "rtl8139", "e1000", "pcnet", "virtio", "virtio-net", "virtio-net-pci", "usb-net", "i82559a", "i82559b", "i82559c", "i82550", "i82562", "i82557a", "i82557c", "i82801", - "vmxnet3", "i82558a" or "i82558b". The Qemu builder uses "virtio-net" by + "vmxnet3", "i82558a" or "i82558b". The Qemu builder uses "virtio-net" by default. -- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist, or must be empty, prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `qemu_binary` (string) - The name of the Qemu binary to look for. This +- `qemu_binary` (string) - The name of the Qemu binary to look for. This defaults to "qemu-system-x86\_64", but may need to be changed for some platforms. For example "qemu-kvm" or "qemu-system-i386" may be a better choice for some systems. -- `qemuargs` (array of array of strings) - Allows complete control over the +- `qemuargs` (array of array of strings) - Allows complete control over the qemu command line (though not, at this time, qemu-img). Each array of strings makes up a command line switch that overrides matching default switch/value pairs. Any value specified as an empty string is ignored. All values after the switch are concatenated with no separator. -~> **Warning:** The qemu command line allows extreme flexibility, so beware +~> **Warning:** The qemu command line allows extreme flexibility, so beware of conflicting arguments causing failures of your run. For instance, using --no-acpi could break the ability to send power signal type commands (e.g., shutdown -P now) to the virtual machine, thus preventing proper shutdown.
To see @@ -263,7 +262,7 @@ command. The arguments are all printed for review. The following shows a sample usage: -```json +``` json { "qemuargs": [ [ "-m", "1024M" ], @@ -282,23 +281,23 @@ The following shows a sample usage: would produce the following (not including other defaults supplied by the builder and not otherwise conflicting with the qemuargs): -```text +``` text qemu-system-x86 -m 1024m --no-acpi -netdev user,id=mynet0,hostfwd=hostip:hostport-guestip:guestport -device virtio-net,netdev=mynet0 ``` -~> **Windows Users:** [QEMU for Windows](https://qemu.weilnetz.de/) builds are available though an environmental variable does need +~> **Windows Users:** [QEMU for Windows](https://qemu.weilnetz.de/) builds are available, though an environment variable does need to be set for QEMU for Windows to redirect stdout to the console instead of stdout.txt. The following shows the environment variable that needs to be set for Windows QEMU support: -```text +``` text setx SDL_STDIO_REDIRECT 0 ``` You can also use the `SSHHostPort` template variable to produce a Packer template that can be invoked by `make` in parallel: -```json +``` json { "qemuargs": [ [ "-netdev", "user,hostfwd=tcp::{{ .SSHHostPort }}-:22,id=forward"], @@ -306,16 +305,17 @@ template that can be invoked by `make` in parallel: ] } ``` + `make -j 3 my-awesome-packer-templates` spawns 3 Packer processes, each of which will bind to its own SSH port as determined by each process. This will also work with WinRM; just change the port forward in `qemuargs` to map to WinRM's default port of `5985` or whatever value you have the service set to listen on. -- `use_default_display` (boolean) - If true, do not pass a `-display` option +- `use_default_display` (boolean) - If true, do not pass a `-display` option to qemu, allowing it to choose the default. This may be needed when running under OS X.
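The `make -j 3` invocation mentioned above can be sketched as a minimal Makefile; the target and template names (`web`, `db`, `cache`) are hypothetical:

``` text
# Hypothetical Makefile: one phony target per template, so running
# `make -j 3 my-awesome-packer-templates` builds all three in parallel.
TEMPLATES := web db cache

my-awesome-packer-templates: $(TEMPLATES)

$(TEMPLATES):
	packer build $@.json

.PHONY: my-awesome-packer-templates $(TEMPLATES)
```

Each parallel build picks its own `SSHHostPort`, so the forwarded ports do not collide.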
-- `shutdown_command` (string) - The command to use to gracefully shut down the +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine unless a shutdown command takes place inside a script, so this may safely be omitted. If @@ -323,30 +323,30 @@ default port of `5985` or whatever value you have the service set to listen on. since reboots may fail and specify the final shutdown command in your last script. -- `shutdown_timeout` (string) - The amount of time to wait after executing the +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is `5m`, or five minutes. -- `skip_compaction` (boolean) - Packer compacts the QCOW2 image using `qemu-img convert`. +- `skip_compaction` (boolean) - Packer compacts the QCOW2 image using `qemu-img convert`. Set this option to `true` to disable compacting. Defaults to `false`. -- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and +- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and maximum port to use for the SSH port on the host machine which is forwarded to the SSH port on the guest machine. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to use as the host port. By default this is 2222 to 4444. -- `vm_name` (string) - This is the name of the image (QCOW2 or IMG) file for +- `vm_name` (string) - This is the name of the image (QCOW2 or IMG) file for the new virtual machine. By default this is "packer-BUILDNAME", where `BUILDNAME` is the name of the build. Currently, no file extension will be used unless it is specified in this option.
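Taken together, the shutdown and naming options above might look like this in a template. This is a sketch only: the required ISO and communicator settings are omitted, and the shutdown command shown is illustrative and depends on the guest OS:

``` json
{
  "builders": [
    {
      "type": "qemu",
      "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
      "shutdown_timeout": "10m",
      "skip_compaction": false,
      "vm_name": "example-server.qcow2"
    }
  ]
}
```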
-- `vnc_bind_address` (string / IP address) - The IP address that should be binded +- `vnc_bind_address` (string / IP address) - The IP address that should be bound to for VNC. By default Packer will use 127.0.0.1 for this. If you wish to bind to all interfaces, use 0.0.0.0. -- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port +- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port to use for VNC access to the virtual machine. The builder uses VNC to type the initial `boot_command`. Because Packer generally runs in parallel, Packer uses a randomly chosen port in this range that appears available. By @@ -366,59 +366,59 @@ template. The boot command is "typed" character for character over a VNC connection to the machine, simulating a human actually typing the keyboard. --> Keystrokes are typed as separate key up/down events over VNC with a - default 100ms delay. The delay alleviates issues with latency and CPU - contention. For local builds you can tune this delay by specifying - e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command. +-> Keystrokes are typed as separate key up/down events over VNC with a +default 100ms delay. The delay alleviates issues with latency and CPU +contention. For local builds you can tune this delay by specifying +e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command. There are a set of special keys available. If these are in your boot command, they will be replaced by the proper key: -- `<bs>` - Backspace +- `<bs>` - Backspace -- `<del>` - Delete +- `<del>` - Delete -- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. +- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress. -- `<esc>` - Simulates pressing the escape key. +- `<esc>` - Simulates pressing the escape key. -- `<tab>` - Simulates pressing the tab key. +- `<tab>` - Simulates pressing the tab key. -- `<f1>` - `<f12>` - Simulates pressing a function key. +- `<f1>` - `<f12>` - Simulates pressing a function key.
-- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. +- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key. -- `<spacebar>` - Simulates pressing the spacebar. +- `<spacebar>` - Simulates pressing the spacebar. -- `<insert>` - Simulates pressing the insert key. +- `<insert>` - Simulates pressing the insert key. -- `<home>` `<end>` - Simulates pressing the home and end keys. +- `<home>` `<end>` - Simulates pressing the home and end keys. -- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. +- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys. -- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. +- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key. -- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. +- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key. -- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. +- `<leftShift>` `<rightShift>` - Simulates pressing the shift key. -- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. +- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key. -- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. +- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key. -- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. +- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key. -- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. +- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key. -- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. +- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key. -- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key. +- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key. -- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before +- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending any additional keys. This is useful if you have to generally wait for the UI to update before typing more. -- `<waitXX>` - Add user defined time.Duration pause before sending any +- `<waitXX>` - Add user defined time.Duration pause before sending any additional keys. For example `<wait10m>` or `<wait1m20s>` When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, @@ -430,7 +430,7 @@ In addition to the special keys, each command to type is treated as a [template engine](/docs/templates/engine.html).
The available variables are: -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server +- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server that is started serving the directory specified by the `http_directory` configuration parameter. If `http_directory` isn't specified, these will be blank! @@ -438,7 +438,7 @@ available variables are: Example boot command. This is actually a working boot command used to start a CentOS 6.4 installer: -```json +``` json { "boot_command": [ "", diff --git a/website/source/docs/builders/triton.html.md b/website/source/docs/builders/triton.html.md index 68a661b73..f1f670eb8 100644 --- a/website/source/docs/builders/triton.html.md +++ b/website/source/docs/builders/triton.html.md @@ -1,12 +1,12 @@ --- +description: | + The triton Packer builder is able to create new images for use with Triton. + These images can be used with both the Joyent public cloud (which is powered + by Triton) as well as with private Triton installations. This builder uses the + Triton Cloud API to create images. layout: docs -sidebar_current: docs-builders-triton -page_title: Triton - Builders -description: |- - The triton Packer builder is able to create new images for use with Triton. - These images can be used with both the Joyent public cloud (which is powered - by Triton) as well with private Triton installations. This builder uses the - Triton Cloud API to create images. +page_title: 'Triton - Builders' +sidebar_current: 'docs-builders-triton' --- # Triton Builder @@ -30,7 +30,7 @@ This reusable image can then be used to launch new machines. The builder does *not* manage images. Once it creates an image, it is up to you to use it or delete it.
-~> **Private installations of Triton must have custom images enabled!** To use +~> **Private installations of Triton must have custom images enabled!** To use the Triton builder with a private/on-prem installation of Joyent's Triton software, you'll need an operator to manually [enable custom images](https://docs.joyent.com/private-cloud/install/image-management) @@ -48,14 +48,14 @@ builder. ### Required: -- `triton_account` (string) - The username of the Triton account to use when +- `triton_account` (string) - The username of the Triton account to use when using the Triton Cloud API. -- `triton_key_id` (string) - The fingerprint of the public key of the SSH key +- `triton_key_id` (string) - The fingerprint of the public key of the SSH key pair to use for authentication with the Triton Cloud API. If `triton_key_material` is not set, it is assumed that the SSH agent has the private key corresponding to this key ID loaded. -- `source_machine_image` (string) - The UUID of the image to base the new +- `source_machine_image` (string) - The UUID of the image to base the new image on. Triton supports multiple types of images, called 'brands' in Triton / Joyent lingo, for containers and VMs. See the chapter [Containers and virtual machines](https://docs.joyent.com/public-cloud/instances) in the @@ -66,40 +66,40 @@ builder. `70e3ae72-96b6-11e6-9056-9737fd4d0764` for version 16.3.1 of the 64bit SmartOS base image (a 'joyent' brand image). -- `source_machine_package` (string) - The Triton package to use while building +- `source_machine_package` (string) - The Triton package to use while building the image. Does not affect (and does not have to be the same as) the package which will be used for a VM instance running this image. On the Joyent public cloud this could, for example, be `g3-standard-0.5-smartos`. -- `image_name` (string) - The name the finished image in Triton will be +- `image_name` (string) - The name the finished image in Triton will be assigned.
Maximum 512 characters but should in practice be much shorter (think between 5 and 20 characters). For example `postgresql-95-server` for an image used as a PostgreSQL 9.5 server. -- `image_version` (string) - The version string for this image. Maximum 128 +- `image_version` (string) - The version string for this image. Maximum 128 characters. Any string will do, but a format of `Major.Minor.Patch` is strongly advised by Joyent. See [Semantic Versioning](http://semver.org/) for more information on the `Major.Minor.Patch` versioning format. ### Optional: -- `triton_url` (string) - The URL of the Triton cloud API to use. If omitted +- `triton_url` (string) - The URL of the Triton cloud API to use. If omitted, it will default to the `us-sw-1` region of the Joyent Public cloud. If you are using your own private Triton installation, you will have to supply the URL of the cloud API of your own Triton installation. -- `triton_key_material` (string) - Path to the file in which the private key +- `triton_key_material` (string) - Path to the file in which the private key of `triton_key_id` is stored. For example `/home/soandso/.ssh/id_rsa`. If this is not specified, the SSH agent is used to sign requests with the `triton_key_id` specified. -- `source_machine_firewall_enabled` (boolean) - Whether or not the firewall of +- `source_machine_firewall_enabled` (boolean) - Whether or not the firewall of the VM used to create the image is enabled. The Triton firewall only filters inbound traffic to the VM. All outbound traffic is always allowed. Currently this builder does not provide an interface to add specific firewall rules. Unless you have a global rule defined in Triton which allows SSH traffic, enabling the firewall will interfere with the SSH provisioner. The default is `false`. -- `source_machine_metadata` (object of key/value strings) - Triton metadata +- `source_machine_metadata` (object of key/value strings) - Triton metadata applied to the VM used to create the image.
Metadata can be used to pass configuration information to the VM without the need for networking. See [Using the metadata @@ -107,38 +107,38 @@ builder. Joyent documentation for more information. This can, for example, be used to set the `user-script` metadata key to have Triton start a user-supplied script after the VM has booted. -- `source_machine_name` (string) - Name of the VM used for building the image. +- `source_machine_name` (string) - Name of the VM used for building the image. Does not affect (and does not have to be the same as) the name of a VM instance running this image. Maximum 512 characters but should in practice be much shorter (think between 5 and 20 characters). For example `mysql-64-server-image-builder`. When omitted defaults to `packer-builder-[image_name]`. -- `source_machine_networks` (array of strings) - The UUID's of Triton networks +- `source_machine_networks` (array of strings) - The UUIDs of Triton networks added to the source machine used for creating the image. For example if any of the provisioners which are run need Internet access you will need to add the UUIDs of the appropriate networks here. If this is not specified, instances will be placed into the default Triton public and internal networks. -- `source_machine_tags` (object of key/value strings) - Tags applied to the VM +- `source_machine_tags` (object of key/value strings) - Tags applied to the VM used to create the image. -- `image_acls` (array of strings) - The UUID's of the users which will have +- `image_acls` (array of strings) - The UUIDs of the users who will have access to this image. When omitted only the owner (the Triton user whose credentials are used) will have access to the image. -- `image_description` (string) - Description of the image. Maximum 512 +- `image_description` (string) - Description of the image. Maximum 512 characters.
-- `image_eula_url` (string) - URL of the End User License Agreement (EULA) for +- `image_eula_url` (string) - URL of the End User License Agreement (EULA) for the image. Maximum 128 characters. -- `image_homepage` (string) - URL of the homepage where users can find +- `image_homepage` (string) - URL of the homepage where users can find information about the image. Maximum 128 characters. -- `image_tags` (object of key/value strings) - Tag applied to the image. +- `image_tags` (object of key/value strings) - Tags applied to the image. ## Basic Example Below is a minimal example to create a joyent-brand image on the Joyent public cloud: -```json +``` json { "builders": [ { diff --git a/website/source/docs/builders/virtualbox-iso.html.md b/website/source/docs/builders/virtualbox-iso.html.md index 041de9742..258f7ba90 100644 --- a/website/source/docs/builders/virtualbox-iso.html.md +++ b/website/source/docs/builders/virtualbox-iso.html.md @@ -1,10 +1,10 @@ --- +description: | + The VirtualBox Packer builder is able to create VirtualBox virtual machines + and export them in the OVF format, starting from an ISO image. layout: docs -sidebar_current: docs-builders-virtualbox-iso -page_title: VirtualBox ISO - Builders -description: |- - The VirtualBox Packer builder is able to create VirtualBox virtual machines - and export them in the OVF format, starting from an ISO image. +page_title: 'VirtualBox ISO - Builders' +sidebar_current: 'docs-builders-virtualbox-iso' --- # VirtualBox Builder (from an ISO) @@ -26,7 +26,7 @@ Here is a basic example. This example is not functional. It will start the OS installer but then fail because we don't provide the preseed file for Ubuntu to self-install. Still, the example serves to show the basic configuration: -```json +``` json { "type": "virtualbox-iso", "guest_os_type": "Ubuntu_64", @@ -55,59 +55,59 @@ builder. ### Required: -- `iso_checksum` (string) - The checksum for the OS ISO file.
Because ISO +- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO files are so large, this is required and Packer will verify it prior to booting a virtual machine with the ISO attached. The type of the checksum is specified with `iso_checksum_type`, documented below. At least one of `iso_checksum` and `iso_checksum_url` must be defined. This has precedence over `iso_checksum_url` type. -- `iso_checksum_type` (string) - The type of the checksum specified in +- `iso_checksum_type` (string) - The type of the checksum specified in `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or "sha512" currently. While "none" will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. -- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file +- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file containing a checksum for the OS ISO file. At least one of `iso_checksum` and `iso_checksum_url` must be defined. This will be ignored if `iso_checksum` is non empty. -- `iso_url` (string) - A URL to the ISO containing the installation image. +- `iso_url` (string) - A URL to the ISO containing the installation image. This URL can be either an HTTP URL or a file URL (or path to a file). If this is an HTTP URL, Packer will download it and cache it between runs. -- `ssh_username` (string) - The username to use to SSH into the machine once +- `ssh_username` (string) - The username to use to SSH into the machine once the OS is installed. -- `ssh_password` (string) - The password to use to SSH into the machine once +- `ssh_password` (string) - The password to use to SSH into the machine once the OS is installed. ### Optional: -- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. 
The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself. -- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `disk_size` (integer) - The size, in megabytes, of the hard disk to create +- `disk_size` (integer) - The size, in megabytes, of the hard disk to create for the VM. By default, this is 40000 (about 40 GB). -- `export_opts` (array of strings) - Additional options to pass to the +- `export_opts` (array of strings) - Additional options to pass to the [VBoxManage export](https://www.virtualbox.org/manual/ch08.html#vboxmanage-export). This can be useful for passing product information to include in the resulting appliance file. Packer JSON configuration file example: - ```json + ``` json { "type": "virtualbox-iso", "export_opts": @@ -143,7 +143,7 @@ builder. "packer_conf.json" ``` -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files listed in @@ -153,16 +153,16 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. 
-- `floppy_dirs` (array of strings) - A list of directories to place onto +- `floppy_dirs` (array of strings) - A list of directories to place onto the floppy disk recursively. This is similar to the `floppy_files` option except that the directory structure is preserved. This is useful when your floppy disk includes drivers or if you just want to organize its contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. -- `format` (string) - Either "ovf" or "ova", this specifies the output format +- `format` (string) - Either "ovf" or "ova", this specifies the output format of the exported virtual machine. This defaults to "ovf". -- `guest_additions_mode` (string) - The method by which guest additions are +- `guest_additions_mode` (string) - The method by which guest additions are made available to the guest for installation. Valid options are "upload", "attach", or "disable". If the mode is "attach" the guest additions ISO will be attached as a CD device to the virtual machine. If the mode is "upload" @@ -170,101 +170,101 @@ builder. `guest_additions_path`. The default value is "upload". If "disable" is used, guest additions won't be downloaded, either. -- `guest_additions_path` (string) - The path on the guest virtual machine +- `guest_additions_path` (string) - The path on the guest virtual machine where the VirtualBox guest additions ISO will be uploaded. By default this is "VBoxGuestAdditions.iso" which should upload into the login directory of the user. This is a [configuration template](/docs/templates/engine.html) where the `Version` variable is replaced with the VirtualBox version. -- `guest_additions_sha256` (string) - The SHA256 checksum of the guest +- `guest_additions_sha256` (string) - The SHA256 checksum of the guest additions ISO that will be uploaded to the guest VM. By default the checksums will be downloaded from the VirtualBox website, so this only needs to be set if you want to be explicit about the checksum.
-- `guest_additions_url` (string) - The URL to the guest additions ISO +- `guest_additions_url` (string) - The URL to the guest additions ISO to upload. This can also be a file URL if the ISO is at a local path. By default, the VirtualBox builder will attempt to find the guest additions ISO on the local file system. If it is not available locally, the builder will download the proper guest additions ISO from the internet. -- `guest_os_type` (string) - The guest OS type being installed. By default +- `guest_os_type` (string) - The guest OS type being installed. By default this is "other", but you can get *dramatic* performance improvements by setting this to the proper value. To view all available values for this, run `VBoxManage list ostypes`. Setting the correct value hints to VirtualBox how to optimize the virtual hardware to work best with that operating system. -- `hard_drive_interface` (string) - The type of controller that the primary +- `hard_drive_interface` (string) - The type of controller that the primary hard drive is attached to, defaults to "ide". When set to "sata", the drive is attached to an AHCI SATA controller. When set to "scsi", the drive is attached to an LsiLogic SCSI controller. -- `sata_port_count` (integer) - The number of ports available on any SATA +- `sata_port_count` (integer) - The number of ports available on any SATA controller created, defaults to 1. VirtualBox supports up to 30 ports on a maximum of 1 SATA controller. Increasing this value can be useful if you want to attach additional drives. -- `hard_drive_nonrotational` (boolean) - Forces some guests (i.e. Windows 7+) +- `hard_drive_nonrotational` (boolean) - Forces some guests (e.g. Windows 7+) to treat disks as SSDs and stops them from performing disk defragmentation. Also set `hard_drive_discard` to `true` to enable TRIM support.
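The SSD-related options above are typically enabled together. A hypothetical template fragment (the SATA interface is included because TRIM support generally assumes an AHCI SATA controller):

``` json
{
  "type": "virtualbox-iso",
  "hard_drive_interface": "sata",
  "hard_drive_nonrotational": true,
  "hard_drive_discard": true
}
```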
-- `hard_drive_discard` (boolean) - When this value is set to `true`, a VDI +- `hard_drive_discard` (boolean) - When this value is set to `true`, a VDI image will be shrunk in response to the trim command from the guest OS. The size of the cleared area must be at least 1MB. Also set `hard_drive_nonrotational` to `true` to enable TRIM support. -- `headless` (boolean) - Packer defaults to building VirtualBox virtual +- `headless` (boolean) - Packer defaults to building VirtualBox virtual machines by launching a GUI that shows the console of the machine being built. When this value is set to `true`, the machine will start without a console. -- `http_directory` (string) - Path to a directory to serve using an +- `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP server will be started. The address and port of the HTTP server will be available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum port the same. By default the values are 8000 and 9000, respectively. -- `iso_interface` (string) - The type of controller that the ISO is attached +- `iso_interface` (string) - The type of controller that the ISO is attached to, defaults to "ide". When set to "sata", the drive is attached to an AHCI SATA controller. 
-- `iso_target_extension` (string) - The extension of the iso file after +- `iso_target_extension` (string) - The extension of the iso file after download. This defaults to "iso". -- `iso_target_path` (string) - The path where the iso should be saved +- `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the original filename as its name. -- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. +- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. Packer will try these in order. If anything goes wrong attempting to download or while downloading a single URL, it will move on to the next. All URLs must point to the same file (same checksum). By default this is empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. -- `keep_registered` (boolean) - Set this to `true` if you would like to keep +- `keep_registered` (boolean) - Set this to `true` if you would like to keep the VM registered with virtualbox. Defaults to `false`. -- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `post_shutdown_delay` (string) - The amount of time to wait after shutting +- `post_shutdown_delay` (string) - The amount of time to wait after shutting down the virtual machine. If you get the error `Error removing floppy controller`, you might need to set this to `5m` or so. By default, the delay is `0s`, or disabled. 
-- `shutdown_command` (string) - The command to use to gracefully shut down the +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine unless a shutdown command takes place inside a script, so this may safely be omitted. If @@ -272,26 +272,26 @@ builder. since reboots may fail and specify the final shutdown command in your last script. -- `shutdown_timeout` (string) - The amount of time to wait after executing the +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is `5m`, or five minutes. -- `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will +- `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will not export the VM. Useful if the build output is not the resultant image, but something created inside the VM. -- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and +- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and maximum port to use for the SSH port on the host machine which is forwarded to the SSH port on the guest machine. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to use as the host port. By default this is 2222 to 4444. -- `ssh_skip_nat_mapping` (boolean) - Defaults to `false`.
When enabled, Packer does not set up forwarded port mapping for SSH requests
and uses `ssh_port` on the host to communicate with the virtual machine.

-- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
+- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
  execute in order to further customize the virtual machine being created. The
  value of this is an array of commands to execute. The commands are executed
  in the order defined in the template. For each command, the command is
@@ -302,26 +302,26 @@ builder.
  variable is replaced with the VM name. More details on how to use
  `VBoxManage` are below.

-- `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
+- `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
  except that it is run after the virtual machine is shut down, and before the
  virtual machine is exported.

-- `virtualbox_version_file` (string) - The path within the virtual machine to
+- `virtualbox_version_file` (string) - The path within the virtual machine to
  upload a file that contains the VirtualBox version that was used to create
  the machine. This information can be useful for provisioning. By default
  this is ".vbox\_version", which will generally upload it into the home
  directory. Set to an empty string to skip uploading this file, which can be
  useful when using the `none` communicator.

-- `vm_name` (string) - This is the name of the OVF file for the new virtual
+- `vm_name` (string) - This is the name of the OVF file for the new virtual
  machine, without the file extension. By default this is "packer-BUILDNAME",
  where "BUILDNAME" is the name of the build.

-- `vrdp_bind_address` (string / IP address) - The IP address that should be
+- `vrdp_bind_address` (string / IP address) - The IP address that should be
  bound to for VRDP. By default Packer will use 127.0.0.1 for this.
If you wish to bind to all interfaces, use 0.0.0.0.

-- `vrdp_port_min` and `vrdp_port_max` (integer) - The minimum and maximum port
+- `vrdp_port_min` and `vrdp_port_max` (integer) - The minimum and maximum port
  to use for VRDP access to the virtual machine. Packer uses a randomly chosen
  port in this range that appears available. By default this is 5900 to 6000.
  The minimum and maximum ports are inclusive.
@@ -342,49 +342,49 @@
machine, simulating a human actually typing the keyboard. There are a set of
special keys available. If these are in your boot command, they will be
replaced by the proper key:

-- `<bs>` - Backspace
+- `<bs>` - Backspace

-- `<del>` - Delete
+- `<del>` - Delete

-- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
+- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.

-- `<esc>` - Simulates pressing the escape key.
+- `<esc>` - Simulates pressing the escape key.

-- `<tab>` - Simulates pressing the tab key.
+- `<tab>` - Simulates pressing the tab key.

-- `<f1>` - `<f12>` - Simulates pressing a function key.
+- `<f1>` - `<f12>` - Simulates pressing a function key.

-- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
+- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.

-- `<spacebar>` - Simulates pressing the spacebar.
+- `<spacebar>` - Simulates pressing the spacebar.

-- `<insert>` - Simulates pressing the insert key.
+- `<insert>` - Simulates pressing the insert key.

-- `<home>` `<end>` - Simulates pressing the home and end keys.
+- `<home>` `<end>` - Simulates pressing the home and end keys.

-- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
+- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.

-- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.
+- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.

-- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.
+- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.

-- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.
+- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.

-- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.
+- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.
-- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the
+- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the
  ctrl key.

-- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the
+- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the
  shift key.

-- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.
+- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.

-- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.
+- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.

-- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
+- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.

-- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
+- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
  sending any additional keys. This is useful if you have to generally wait
  for the UI to update before typing more.
@@ -398,7 +398,7 @@
In addition to the special keys, each command to type is treated as a
[template engine](/docs/templates/engine.html). The
available variables are:

-- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
+- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
  that is started serving the directory specified by the `http_directory`
  configuration parameter. If `http_directory` isn't specified, these will be
  blank!
@@ -406,7 +406,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:

-```text
+``` text
[
  "<esc><esc><enter><wait>",
  "/install/vmlinuz noapic ",
@@ -447,7 +447,7 @@
Extra VBoxManage commands are defined in the template in the `vboxmanage`
section.
An example is shown below that sets the memory and number of CPUs within the virtual machine: -```json +``` json { "vboxmanage": [ ["modifyvm", "{{.Name}}", "--memory", "1024"], diff --git a/website/source/docs/builders/virtualbox-ovf.html.md b/website/source/docs/builders/virtualbox-ovf.html.md index 297b040f0..5d6aa81fb 100644 --- a/website/source/docs/builders/virtualbox-ovf.html.md +++ b/website/source/docs/builders/virtualbox-ovf.html.md @@ -1,11 +1,11 @@ --- +description: | + This VirtualBox Packer builder is able to create VirtualBox virtual machines + and export them in the OVF format, starting from an existing OVF/OVA (exported + virtual machine image). layout: docs -sidebar_current: docs-builders-virtualbox-ovf -page_title: VirtualBox OVF/OVA - Builders -description: |- - This VirtualBox Packer builder is able to create VirtualBox virtual machines - and export them in the OVF format, starting from an existing OVF/OVA (exported - virtual machine image). +page_title: 'VirtualBox OVF/OVA - Builders' +sidebar_current: 'docs-builders-virtualbox-ovf' --- # VirtualBox Builder (from an OVF/OVA) @@ -20,13 +20,11 @@ image). 
When exporting from VirtualBox make sure to choose OVF Version 2, since Version 1 is not compatible and will generate errors like this: -``` -==> virtualbox-ovf: Progress state: VBOX_E_FILE_ERROR -==> virtualbox-ovf: VBoxManage: error: Appliance read failed -==> virtualbox-ovf: VBoxManage: error: Error reading "source.ova": element "Section" has no "type" attribute, line 21 -==> virtualbox-ovf: VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Appliance, interface IAppliance -==> virtualbox-ovf: VBoxManage: error: Context: "int handleImportAppliance(HandlerArg*)" at line 304 of file VBoxManageAppliance.cpp -``` + ==> virtualbox-ovf: Progress state: VBOX_E_FILE_ERROR + ==> virtualbox-ovf: VBoxManage: error: Appliance read failed + ==> virtualbox-ovf: VBoxManage: error: Error reading "source.ova": element "Section" has no "type" attribute, line 21 + ==> virtualbox-ovf: VBoxManage: error: Details: code VBOX_E_FILE_ERROR (0x80bb0004), component Appliance, interface IAppliance + ==> virtualbox-ovf: VBoxManage: error: Context: "int handleImportAppliance(HandlerArg*)" at line 304 of file VBoxManageAppliance.cpp The builder builds a virtual machine by importing an existing OVF or OVA file. It then boots this image, runs provisioners on this new VM, and exports that VM @@ -38,7 +36,7 @@ build. Here is a basic example. This example is functional if you have an OVF matching the settings here. -```json +``` json { "type": "virtualbox-ovf", "source_path": "source.ovf", @@ -64,43 +62,43 @@ builder. ### Required: -- `source_path` (string) - The path to an OVF or OVA file that acts as the +- `source_path` (string) - The path to an OVF or OVA file that acts as the source of this build. It can also be a URL. -- `ssh_username` (string) - The username to use to SSH into the machine once +- `ssh_username` (string) - The username to use to SSH into the machine once the OS is installed. 
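Since `source_path` may also be a URL, the required options above can point at a remotely hosted appliance. A minimal sketch of such a template follows; the URL, checksum value, and username are placeholders for illustration, not values from this documentation, and `checksum`/`checksum_type` are the optional settings described below:

``` json
{
  "type": "virtualbox-ovf",
  "source_path": "https://example.com/exports/base.ova",
  "checksum_type": "sha256",
  "checksum": "<sha256 of base.ova>",
  "ssh_username": "packer"
}
```

Verifying a checksum is especially worthwhile for remote sources, since a corrupted download would otherwise only fail later, at import time.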
### Optional: -- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself. -- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `checksum` (string) - The checksum for the OVA file. The type of the +- `checksum` (string) - The checksum for the OVA file. The type of the checksum is specified with `checksum_type`, documented below. -- `checksum_type` (string) - The type of the checksum specified in `checksum`. +- `checksum_type` (string) - The type of the checksum specified in `checksum`. Valid values are "none", "md5", "sha1", "sha256", or "sha512". Although the checksum will not be verified when `checksum_type` is set to "none", this is not recommended since OVA files can be very large and corruption does happen from time to time. -- `export_opts` (array of strings) - Additional options to pass to the +- `export_opts` (array of strings) - Additional options to pass to the [VBoxManage export](https://www.virtualbox.org/manual/ch08.html#vboxmanage-export). This can be useful for passing product information to include in the resulting appliance file. Packer JSON configuration file example: - ```json + ``` json { "type": "virtualbox-ovf", "export_opts": @@ -136,7 +134,7 @@ builder. 
"packer_conf.json" ``` -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files listed in @@ -146,16 +144,16 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto the +- `floppy_dirs` (array of strings) - A list of directories to place onto the floppy disk recursively. This is similar to the `floppy_files` option except that the directory structure is preserved. This is useful for when your floppy disk includes drivers or if you just want to organize it's contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. -- `format` (string) - Either "ovf" or "ova", this specifies the output format +- `format` (string) - Either "ovf" or "ova", this specifies the output format of the exported virtual machine. This defaults to "ovf". -- `guest_additions_mode` (string) - The method by which guest additions are +- `guest_additions_mode` (string) - The method by which guest additions are made available to the guest for installation. Valid options are "upload", "attach", or "disable". If the mode is "attach" the guest additions ISO will be attached as a CD device to the virtual machine. If the mode is "upload" @@ -163,63 +161,63 @@ builder. `guest_additions_path`. The default value is "upload". If "disable" is used, guest additions won't be downloaded, either. -- `guest_additions_path` (string) - The path on the guest virtual machine +- `guest_additions_path` (string) - The path on the guest virtual machine where the VirtualBox guest additions ISO will be uploaded. 
By default this is "VBoxGuestAdditions.iso" which should upload into the login directory of the user. This is a [configuration template](/docs/templates/engine.html) where the `Version` variable is replaced with the VirtualBox version. -- `guest_additions_sha256` (string) - The SHA256 checksum of the guest +- `guest_additions_sha256` (string) - The SHA256 checksum of the guest additions ISO that will be uploaded to the guest VM. By default the checksums will be downloaded from the VirtualBox website, so this only needs to be set if you want to be explicit about the checksum. -- `guest_additions_url` (string) - The URL to the guest additions ISO +- `guest_additions_url` (string) - The URL to the guest additions ISO to upload. This can also be a file URL if the ISO is at a local path. By default the VirtualBox builder will go and download the proper guest additions ISO from the internet. -- `headless` (boolean) - Packer defaults to building VirtualBox virtual +- `headless` (boolean) - Packer defaults to building VirtualBox virtual machines by launching a GUI that shows the console of the machine being built. When this value is set to true, the machine will start without a console. -- `http_directory` (string) - Path to a directory to serve using an +- `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP server will be started. The address and port of the HTTP server will be available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. 
Because Packer often runs in parallel, Packer will choose a randomly available
port in this range to run the HTTP server. If you want to force the HTTP
server to be on one port, make this minimum and maximum port the same. By
default the values are 8000 and 9000, respectively.

-- `import_flags` (array of strings) - Additional flags to pass to
+- `import_flags` (array of strings) - Additional flags to pass to
  `VBoxManage import`. This can be used to add additional command-line flags
  such as `--eula-accept` to accept a EULA in the OVF.

-- `import_opts` (string) - Additional options to pass to the
+- `import_opts` (string) - Additional options to pass to
  `VBoxManage import`. This can be useful for passing "keepallmacs" or
  "keepnatmacs" options for existing ovf images.

-- `output_directory` (string) - This is the path to the directory where the
+- `output_directory` (string) - This is the path to the directory where the
  resulting virtual machine will be created. This may be relative or absolute.
  If relative, the path is relative to the working directory when `packer` is
  executed. This directory must not exist or be empty prior to running the
  builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name
  of the build.

-- `post_shutdown_delay` (string) - The amount of time to wait after shutting
+- `post_shutdown_delay` (string) - The amount of time to wait after shutting
  down the virtual machine. If you get the error `Error removing floppy
  controller`, you might need to set this to `5m` or so. By default, the delay
  is `0s`, or disabled.

-- `shutdown_command` (string) - The command to use to gracefully shut down the
+- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine, unless
  a shutdown command is run from inside a provisioning script, in which case
  this may safely be omitted. If
@@ -227,30 +225,30 @@ builder.
since reboots may fail and specify the final shutdown command in your last
script.

-- `shutdown_timeout` (string) - The amount of time to wait after executing the
+- `shutdown_timeout` (string) - The amount of time to wait after executing the
  `shutdown_command` for the virtual machine to actually shut down. If it
  doesn't shut down in this time, it is an error. By default, the timeout is
  "5m", or five minutes.

-- `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will
+- `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will
  not export the VM. Useful if the build output is not the resultant image,
  but created inside the VM.

-- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
+- `ssh_host_port_min` and `ssh_host_port_max` (integer) - The minimum and
  maximum port to use for the SSH port on the host machine which is forwarded
  to the SSH port on the guest machine. Because Packer often runs in parallel,
  Packer will choose a randomly available port in this range to use as the
  host port.

-- `ssh_skip_nat_mapping` (boolean) - Defaults to `false`. When enabled, Packer
+- `ssh_skip_nat_mapping` (boolean) - Defaults to `false`. When enabled, Packer
  does not set up forwarded port mapping for SSH requests and uses `ssh_port`
  on the host to communicate with the virtual machine.

-- `target_path` (string) - The path where the OVA should be saved
+- `target_path` (string) - The path where the OVA should be saved
  after download. By default, it will go in the packer cache, with a hash of
  the original filename as its name.

-- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
+- `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to
  execute in order to further customize the virtual machine being created. The
  value of this is an array of commands to execute. The commands are executed
  in the order defined in the template. For each command, the command is
@@ -261,26 +259,26 @@ builder.
variable is replaced with the VM name. More details on how to use
`VBoxManage` are below.

-- `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
+- `vboxmanage_post` (array of array of strings) - Identical to `vboxmanage`,
  except that it is run after the virtual machine is shut down, and before the
  virtual machine is exported.

-- `virtualbox_version_file` (string) - The path within the virtual machine to
+- `virtualbox_version_file` (string) - The path within the virtual machine to
  upload a file that contains the VirtualBox version that was used to create
  the machine. This information can be useful for provisioning. By default
  this is ".vbox\_version", which will generally upload it into the home
  directory. Set to an empty string to skip uploading this file, which can be
  useful when using the `none` communicator.

-- `vm_name` (string) - This is the name of the virtual machine when it is
+- `vm_name` (string) - This is the name of the virtual machine when it is
  imported as well as the name of the OVF file when the virtual machine is
  exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the
  name of the build.

-- `vrdp_bind_address` (string / IP address) - The IP address that should be
+- `vrdp_bind_address` (string / IP address) - The IP address that should be
  bound to for VRDP. By default Packer will use 127.0.0.1 for this.

-- `vrdp_port_min` and `vrdp_port_max` (integer) - The minimum and maximum port
+- `vrdp_port_min` and `vrdp_port_max` (integer) - The minimum and maximum port
  to use for VRDP access to the virtual machine. Packer uses a randomly chosen
  port in this range that appears available. By default this is 5900 to 6000.
  The minimum and maximum ports are inclusive.
@@ -300,49 +298,49 @@
machine, simulating a human actually typing the keyboard. There are a set of
special keys available.
If these are in your boot command, they will be
replaced by the proper key:

-- `<bs>` - Backspace
+- `<bs>` - Backspace

-- `<del>` - Delete
+- `<del>` - Delete

-- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
+- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.

-- `<esc>` - Simulates pressing the escape key.
+- `<esc>` - Simulates pressing the escape key.

-- `<tab>` - Simulates pressing the tab key.
+- `<tab>` - Simulates pressing the tab key.

-- `<f1>` - `<f12>` - Simulates pressing a function key.
+- `<f1>` - `<f12>` - Simulates pressing a function key.

-- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
+- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.

-- `<spacebar>` - Simulates pressing the spacebar.
+- `<spacebar>` - Simulates pressing the spacebar.

-- `<insert>` - Simulates pressing the insert key.
+- `<insert>` - Simulates pressing the insert key.

-- `<home>` `<end>` - Simulates pressing the home and end keys.
+- `<home>` `<end>` - Simulates pressing the home and end keys.

-- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
+- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.

-- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.
+- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.

-- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.
+- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.

-- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.
+- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.

-- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.
+- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.

-- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the
+- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the
  ctrl key.

-- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the
+- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the
  shift key.

-- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.
+- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.

-- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.
+- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.

-- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
+- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
-- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
+- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
  sending any additional keys. This is useful if you have to generally wait
  for the UI to update before typing more.
@@ -350,7 +348,7 @@
In addition to the special keys, each command to type is treated as a
[template engine](/docs/templates/engine.html). The
available variables are:

-- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
+- `HTTPIP` and `HTTPPort` - The IP and port, respectively, of an HTTP server
  that is started serving the directory specified by the `http_directory`
  configuration parameter. If `http_directory` isn't specified, these will be
  blank!
@@ -358,7 +356,7 @@ available variables are:
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:

-```text
+``` text
[
  "<esc><esc><enter><wait>",
  "/install/vmlinuz noapic ",
@@ -399,7 +397,7 @@
Extra VBoxManage commands are defined in the template in the `vboxmanage`
section. An example is shown below that sets the memory and number of CPUs
within the virtual machine:

-```json
+``` json
{
  "vboxmanage": [
    ["modifyvm", "{{.Name}}", "--memory", "1024"],
diff --git a/website/source/docs/builders/virtualbox.html.md b/website/source/docs/builders/virtualbox.html.md
index b2cdee3fb..470671973 100644
--- a/website/source/docs/builders/virtualbox.html.md
+++ b/website/source/docs/builders/virtualbox.html.md
@@ -1,10 +1,10 @@
---
+description: |
+    The VirtualBox Packer builder is able to create VirtualBox virtual machines
+    and export them in the OVA or OVF format.
layout: docs
-sidebar_current: docs-builders-virtualbox
-page_title: VirtualBox - Builders
-description: |-
-    The VirtualBox Packer builder is able to create VirtualBox virtual machines
-    and export them in the OVA or OVF format.
+page_title: 'VirtualBox - Builders' +sidebar_current: 'docs-builders-virtualbox' --- # VirtualBox Builder @@ -17,13 +17,13 @@ Packer actually comes with multiple builders able to create VirtualBox machines, depending on the strategy you want to use to build the image. Packer supports the following VirtualBox builders: -- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO - file, creates a brand new VirtualBox VM, installs an OS, provisions software - within the OS, then exports that machine to create an image. This is best for - people who want to start from scratch. +- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO + file, creates a brand new VirtualBox VM, installs an OS, provisions software + within the OS, then exports that machine to create an image. This is best for + people who want to start from scratch. -- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports an - existing OVF/OVA file, runs provisioners on top of that VM, and exports that - machine to create an image. This is best if you have an existing VirtualBox VM - export you want to use as the source. As an additional benefit, you can feed - the artifact of this builder back into itself to iterate on a machine. +- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports an + existing OVF/OVA file, runs provisioners on top of that VM, and exports that + machine to create an image. This is best if you have an existing VirtualBox VM + export you want to use as the source. As an additional benefit, you can feed + the artifact of this builder back into itself to iterate on a machine. 
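The iterate-on-a-machine workflow mentioned above can be sketched as a `virtualbox-ovf` template whose `source_path` points at the OVF exported by a previous build. The paths, VM name, and username below are illustrative placeholders, not values from this documentation:

``` json
{
  "type": "virtualbox-ovf",
  "source_path": "output-base/base.ovf",
  "output_directory": "output-base-v2",
  "vm_name": "base-v2",
  "ssh_username": "packer"
}
```

Each iteration's `output_directory` then becomes the next iteration's `source_path`, letting you layer provisioning steps without rebuilding from an ISO.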
diff --git a/website/source/docs/builders/vmware-iso.html.md b/website/source/docs/builders/vmware-iso.html.md index ad02e7aae..4098182de 100644 --- a/website/source/docs/builders/vmware-iso.html.md +++ b/website/source/docs/builders/vmware-iso.html.md @@ -1,13 +1,13 @@ --- +description: | + This VMware Packer builder is able to create VMware virtual machines from an + ISO file as a source. It currently supports building virtual machines on hosts + running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and + VMware Player on Linux. It can also build machines directly on VMware vSphere + Hypervisor using SSH as opposed to the vSphere API. layout: docs -sidebar_current: docs-builders-vmware-iso -page_title: VMware ISO - Builders -description: |- - This VMware Packer builder is able to create VMware virtual machines from an - ISO file as a source. It currently supports building virtual machines on hosts - running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and - VMware Player on Linux. It can also build machines directly on VMware vSphere - Hypervisor using SSH as opposed to the vSphere API. +page_title: 'VMware ISO - Builders' +sidebar_current: 'docs-builders-vmware-iso' --- # VMware Builder (from ISO) @@ -35,7 +35,7 @@ Here is a basic example. This example is not functional. It will start the OS installer but then fail because we don't provide the preseed file for Ubuntu to self-install. Still, the example serves to show the basic configuration: -```json +``` json { "type": "vmware-iso", "iso_url": "http://old-releases.ubuntu.com/releases/precise/ubuntu-12.04.2-server-amd64.iso", @@ -58,65 +58,65 @@ builder. ### Required: -- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO +- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO files are so large, this is required and Packer will verify it prior to booting a virtual machine with the ISO attached. 
The type of the checksum is specified with `iso_checksum_type`, documented
below. At least one of `iso_checksum` and `iso_checksum_url` must be defined.
If both are defined, `iso_checksum` takes precedence over `iso_checksum_url`.

-- `iso_checksum_type` (string) - The type of the checksum specified in
+- `iso_checksum_type` (string) - The type of the checksum specified in
  `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
  "sha512" currently. While "none" will skip checksumming, this is not
  recommended since ISO files are generally large and corruption does happen
  from time to time.

-- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file
+- `iso_checksum_url` (string) - A URL to a GNU or BSD style checksum file
  containing a checksum for the OS ISO file. At least one of `iso_checksum`
  and `iso_checksum_url` must be defined. This will be ignored if
  `iso_checksum` is non-empty.

-- `iso_url` (string) - A URL to the ISO containing the installation image.
+- `iso_url` (string) - A URL to the ISO containing the installation image.
  This URL can be either an HTTP URL or a file URL (or path to a file). If
  this is an HTTP URL, Packer will download it and cache it between runs.

-- `ssh_username` (string) - The username to use to SSH into the machine once
+- `ssh_username` (string) - The username to use to SSH into the machine once
  the OS is installed.

### Optional:

-- `boot_command` (array of strings) - This is an array of commands to type
+- `boot_command` (array of strings) - This is an array of commands to type
  when the virtual machine is first booted. The goal of these commands should
  be to type just enough to initialize the operating system installer. Special
  keys can be typed as well, and are covered in the section below on the boot
  command. If this is not specified, it is assumed the installer will
  start itself.
-- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't specified, the default is 10 seconds. -- `disk_additional_size` (array of integers) - The size(s) of any additional +- `disk_additional_size` (array of integers) - The size(s) of any additional hard disks for the VM in megabytes. If this is not specified then the VM will only contain a primary hard disk. The builder uses expandable, not fixed-size virtual hard disks, so the actual file representing the disk will not use the full size unless it is full. -- `disk_size` (integer) - The size of the hard disk for the VM in megabytes. +- `disk_size` (integer) - The size of the hard disk for the VM in megabytes. The builder uses expandable, not fixed-size virtual hard disks, so the actual file representing the disk will not use the full size unless it is full. By default this is set to 40,000 (about 40 GB). -- `disk_type_id` (string) - The type of VMware virtual disk to create. The +- `disk_type_id` (string) - The type of VMware virtual disk to create. The default is "1", which corresponds to a growable virtual disk split in 2GB files. This option is for advanced usage, modify only if you know what you're doing. For more information, please consult the [Virtual Disk Manager User's Guide](https://www.vmware.com/pdf/VirtualDiskManager.pdf) for desktop VMware clients. For ESXi, refer to the proper ESXi documentation. -- `floppy_files` (array of strings) - A list of files to place onto a floppy +- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. 
This is most useful for
unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default, no floppy will be attached. All files listed in
@@ -126,131 +126,131 @@ builder.
  and \[\]) are allowed. Directory names are also allowed, which will add all
  the files found in the directory to the floppy.

-- `floppy_dirs` (array of strings) - A list of directories to place onto
+- `floppy_dirs` (array of strings) - A list of directories to place onto
  the floppy disk recursively. This is similar to the `floppy_files` option
  except that the directory structure is preserved. This is useful for when
  your floppy disk includes drivers or if you just want to organize its
  contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed.

-- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
+- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
  "/Applications/VMware Fusion.app" but this setting allows you to customize
  this.

-- `guest_os_type` (string) - The guest OS type being installed. This will be
+- `guest_os_type` (string) - The guest OS type being installed. This will be
  set in the VMware VMX. By default this is "other". By specifying a more
  specific OS type, VMware may perform some optimizations or virtual hardware
  changes to better support the operating system running in the
  virtual machine.

-- `headless` (boolean) - Packer defaults to building VMware virtual machines
+- `headless` (boolean) - Packer defaults to building VMware virtual machines
  by launching a GUI that shows the console of the machine being built. When
  this value is set to true, the machine will start without a console. For
  VMware machines, Packer will output VNC connection information in case you
  need to connect to the console to debug the build process.

-- `http_directory` (string) - Path to a directory to serve using an
+- `http_directory` (string) - Path to a directory to serve using an
  HTTP server.
The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP server will be started. The address and port of the HTTP server will be available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum port the same. By default the values are 8000 and 9000, respectively. -- `iso_target_extension` (string) - The extension of the iso file after +- `iso_target_extension` (string) - The extension of the iso file after download. This defaults to "iso". -- `iso_target_path` (string) - The path where the iso should be saved after +- `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the original filename as its name. -- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. +- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. Packer will try these in order. If anything goes wrong attempting to download or while downloading a single URL, it will move on to the next. All URLs must point to the same file (same checksum). By default this is empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. -- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. 
If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `remote_cache_datastore` (string) - The path to the datastore where +- `remote_cache_datastore` (string) - The path to the datastore where supporting files will be stored during the build on the remote machine. By default this is the same as the `remote_datastore` option. This only has an effect if `remote_type` is enabled. -- `remote_cache_directory` (string) - The path where the ISO and/or floppy +- `remote_cache_directory` (string) - The path where the ISO and/or floppy files will be stored during the build on the remote machine. The path is relative to the `remote_cache_datastore` on the remote machine. By default this is "packer\_cache". This only has an effect if `remote_type` is enabled. -- `remote_datastore` (string) - The path to the datastore where the resulting +- `remote_datastore` (string) - The path to the datastore where the resulting VM will be stored when it is built on the remote machine. By default this is "datastore1". This only has an effect if `remote_type` is enabled. -- `remote_host` (string) - The host of the remote machine used for access. +- `remote_host` (string) - The host of the remote machine used for access. This is only required if `remote_type` is enabled. -- `remote_password` (string) - The SSH password for the user used to access +- `remote_password` (string) - The SSH password for the user used to access the remote machine. By default this is empty. This only has an effect if `remote_type` is enabled. -- `remote_private_key_file` (string) - The path to the PEM encoded private key +- `remote_private_key_file` (string) - The path to the PEM encoded private key file for the user used to access the remote machine. By default this is empty. This only has an effect if `remote_type` is enabled. 
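
To make the download, HTTP, and output options above concrete, here is a
hypothetical fragment of a `vmware-iso` builder (URLs and paths are
placeholders, and unrelated required settings such as the ISO checksum are
omitted). Pinning `http_port_min` and `http_port_max` to the same value forces
the kickstart HTTP server onto a single port:

``` json
{
  "type": "vmware-iso",
  "iso_urls": [
    "http://mirror-a.example.com/install.iso",
    "http://mirror-b.example.com/install.iso"
  ],
  "iso_target_path": "iso-cache/install.iso",
  "http_directory": "http",
  "http_port_min": 8500,
  "http_port_max": 8500,
  "output_directory": "output-example"
}
```
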
-- `remote_type` (string) - The type of remote machine that will be used to +- `remote_type` (string) - The type of remote machine that will be used to build this VM rather than a local desktop product. The only value accepted for this currently is "esx5". If this is not set, a desktop product will be used. By default, this is not set. -- `remote_username` (string) - The username for the SSH user that will access +- `remote_username` (string) - The username for the SSH user that will access the remote machine. This is required if `remote_type` is enabled. -- `shutdown_command` (string) - The command to use to gracefully shut down the +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine. -- `shutdown_timeout` (string) - The amount of time to wait after executing the +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is "5m", or five minutes. -- `skip_compaction` (boolean) - VMware-created disks are defragmented and +- `skip_compaction` (boolean) - VMware-created disks are defragmented and compacted at the end of the build process using `vmware-vdiskmanager`. In certain rare cases, this might actually end up making the resulting disks slightly larger. If you find this to be the case, you can disable compaction - using this configuration value. Defaults to `false`. + using this configuration value. Defaults to `false`. -- `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will +- `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will not export the VM. Useful if the build output is not the resultant image, but created inside the VM. 
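
As a sketch of how the shutdown options above fit together, a typical Linux
guest might be shut down gracefully like this (the password and sudo usage are
assumptions about the guest, not requirements):

``` json
{
  "shutdown_command": "echo 'vagrant' | sudo -S shutdown -P now",
  "shutdown_timeout": "10m"
}
```
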
-- `keep_registered` (boolean) - Set this to `true` if you would like to keep +- `keep_registered` (boolean) - Set this to `true` if you would like to keep the VM registered with the remote ESXi server. This is convenient if you use packer to provision VMs on ESXi and don't want to use ovftool to deploy the resulting artifact (VMX or OVA or whatever you used as `format`). Defaults to `false`. -- `ovftool_options` (array of strings) - Extra options to pass to ovftool +- `ovftool_options` (array of strings) - Extra options to pass to ovftool during export. Each item in the array is a new argument. The options `--noSSLVerify`, `--skipManifestCheck`, and `--targetType` are reserved, and should not be passed to this argument. -- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to +- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to upload into the VM. Valid values are "darwin", "linux", and "windows". By default, this is empty, which means VMware tools won't be uploaded. -- `tools_upload_path` (string) - The path in the VM to upload the +- `tools_upload_path` (string) - The path in the VM to upload the VMware tools. This only takes effect if `tools_upload_flavor` is non-empty. This is a [configuration template](/docs/templates/engine.html) that has a single @@ -258,46 +258,46 @@ builder. By default the upload path is set to `{{.Flavor}}.iso`. This setting is not used when `remote_type` is "esx5". -- `version` (string) - The [vmx hardware +- `version` (string) - The [vmx hardware version](http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003746) for the new virtual machine. Only the default value has been tested, any other value is experimental. Default value is '9'. -- `vm_name` (string) - This is the name of the VMX file for the new virtual +- `vm_name` (string) - This is the name of the VMX file for the new virtual machine, without the file extension. 
By default this is "packer-BUILDNAME",
  where "BUILDNAME" is the name of the build.

-- `vmdk_name` (string) - The filename of the virtual disk that'll be created,
+- `vmdk_name` (string) - The filename of the virtual disk that'll be created,
   without the extension. This defaults to "packer".

-- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
+- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
   into the virtual machine VMX file. This is for advanced users who want to
   set properties such as memory, CPU, etc.

-- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
+- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
   except that it is run after the virtual machine is shutdown, and before the
   virtual machine is exported.

-- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces from
+- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces from
   the VMX file after building. This is for advanced users who understand the
   ramifications, but is useful for building Vagrant boxes since Vagrant will
   create ethernet interfaces when provisioning a box.

-- `vmx_template_path` (string) - Path to a [configuration
+- `vmx_template_path` (string) - Path to a [configuration
   template](/docs/templates/engine.html) that defines the contents of the
   virtual machine VMX file for VMware. This is for **advanced users only** as
   this can render the virtual machine non-functional. See below for more
   information. For basic VMX modifications, try `vmx_data` first.

-- `vnc_bind_address` (string / IP address) - The IP address that should be binded
+- `vnc_bind_address` (string / IP address) - The IP address that should be bound
   to for VNC. By default packer will use 127.0.0.1 for this.
If you wish to bind to all interfaces, use 0.0.0.0.

-- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is
+- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is
   used to secure the VNC communication with the VM.

-- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
+- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
   to use for VNC access to the virtual machine. The builder uses VNC to type
   the initial `boot_command`. Because Packer generally runs in parallel,
   Packer uses a randomly chosen port in this range that appears available. By
@@ -317,55 +317,55 @@ template.
The boot command is "typed" character for character over a VNC
connection to the machine, simulating a human actually typing the keyboard.

--> Keystrokes are typed as separate key up/down events over VNC with a
-   default 100ms delay. The delay alleviates issues with latency and CPU
-   contention. For local builds you can tune this delay by specifying
-   e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
+-> Keystrokes are typed as separate key up/down events over VNC with a
+default 100ms delay. The delay alleviates issues with latency and CPU
+contention. For local builds you can tune this delay by specifying
+e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.

There are a set of special keys available. If these are in your boot command,
they will be replaced by the proper key:

-- `<bs>` - Backspace
+- `<bs>` - Backspace

-- `<del>` - Delete
+- `<del>` - Delete

-- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
+- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.

-- `<esc>` - Simulates pressing the escape key.
+- `<esc>` - Simulates pressing the escape key.

-- `<tab>` - Simulates pressing the tab key.
+- `<tab>` - Simulates pressing the tab key.

-- `<f1>` - `<f12>` - Simulates pressing a function key.
+- `<f1>` - `<f12>` - Simulates pressing a function key.

-- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
+- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.

-- `<spacebar>` - Simulates pressing the spacebar.
+- `<spacebar>` - Simulates pressing the spacebar.

-- `<insert>` - Simulates pressing the insert key.
+- `<insert>` - Simulates pressing the insert key.

-- `<home>` `<end>` - Simulates pressing the home and end keys.
+- `<home>` `<end>` - Simulates pressing the home and end keys.

-- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
+- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.

-- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.
+- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.

-- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.
+- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.

-- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.
+- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.

-- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.
+- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.

-- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key.
+- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl key.

-- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key.
+- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the shift key.

-- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.
+- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.

-- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.
+- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.

-- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
+- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.

-- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
+- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
   sending any additional keys. This is useful if you have to generally wait
   for the UI to update before typing more.

@@ -379,7 +379,7 @@
In addition to the special keys, each command to type is treated as a
[template engine](/docs/templates/engine.html). The
available variables are:

-- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
+- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
   that is started serving the directory specified by the `http_directory`
   configuration parameter.
If `http_directory` isn't specified, these will be blank!
@@ -387,7 +387,7 @@ available variables are:

Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:

-```text
+``` text
[
  "<esc><wait>",
  "/install/vmlinuz noapic ",
@@ -410,7 +410,7 @@
file](https://github.com/hashicorp/packer/blob/20541a7eda085aa5cf35bfed5069592ca
But for advanced users, this template can be customized. This allows Packer to
build virtual machines of effectively any guest operating system type.

-~> **This is an advanced feature.** Modifying the VMX template can easily
+~> **This is an advanced feature.** Modifying the VMX template can easily
cause your virtual machine to not boot properly. Please only modify the
template if you know what you're doing.

@@ -418,11 +418,11 @@
Within the template, a handful of variables are available so that your
template can continue working with the rest of the Packer machinery. Using
these variables isn't required, however.

-- `Name` - The name of the virtual machine.
-- `GuestOS` - The VMware-valid guest OS type.
-- `DiskName` - The filename (without the suffix) of the main virtual disk.
-- `ISOPath` - The path to the ISO to use for the OS installation.
-- `Version` - The Hardware version VMWare will execute this vm under. Also
+- `Name` - The name of the virtual machine.
+- `GuestOS` - The VMware-valid guest OS type.
+- `DiskName` - The filename (without the suffix) of the main virtual disk.
+- `ISOPath` - The path to the ISO to use for the OS installation.
+- `Version` - The hardware version VMware will execute this VM under. Also
   known as the `virtualhw.version`.

## Building on a Remote vSphere Hypervisor

In addition to using the desktop products of VMware locally to build virtual
machines, Packer can use a remote VMware Hypervisor to build the
virtual machine.

--> **Note:** Packer supports ESXi 5.1 and above.
+-> **Note:** Packer supports ESXi 5.1 and above.
Before using a remote vSphere Hypervisor, you need to enable GuestIPHack by running the following command: -```text +``` text esxcli system settings advanced set -o /Net/GuestIPHack -i 1 ``` @@ -453,36 +453,35 @@ connections. To use a remote VMware vSphere Hypervisor to build your virtual machine, fill in the required `remote_*` configurations: -- `remote_type` - This must be set to "esx5". +- `remote_type` - This must be set to "esx5". -- `remote_host` - The host of the remote machine. +- `remote_host` - The host of the remote machine. Additionally, there are some optional configurations that you'll likely have to modify as well: -- `remote_port` - The SSH port of the remote machine +- `remote_port` - The SSH port of the remote machine -- `remote_datastore` - The path to the datastore where the VM will be stored +- `remote_datastore` - The path to the datastore where the VM will be stored on the ESXi machine. -- `remote_cache_datastore` - The path to the datastore where supporting files +- `remote_cache_datastore` - The path to the datastore where supporting files will be stored during the build on the remote machine. -- `remote_cache_directory` - The path where the ISO and/or floppy files will +- `remote_cache_directory` - The path where the ISO and/or floppy files will be stored during the build on the remote machine. The path is relative to the `remote_cache_datastore` on the remote machine. -- `remote_username` - The SSH username used to access the remote machine. +- `remote_username` - The SSH username used to access the remote machine. -- `remote_password` - The SSH password for access to the remote machine. +- `remote_password` - The SSH password for access to the remote machine. -- `remote_private_key_file` - The SSH key for access to the remote machine. +- `remote_private_key_file` - The SSH key for access to the remote machine. 
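
Putting the required and optional settings together, a remote build fragment
might look like the following (the host, credentials, and datastore names are
placeholders):

``` json
{
  "remote_type": "esx5",
  "remote_host": "esxi-01.example.com",
  "remote_username": "root",
  "remote_password": "supersecret",
  "remote_datastore": "datastore1",
  "remote_cache_directory": "packer_cache"
}
```
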
-- `format` (string) - Either "ovf", "ova" or "vmx", this specifies the output +- `format` (string) - Either "ovf", "ova" or "vmx", this specifies the output format of the exported virtual machine. This defaults to "ovf". Before using this option, you need to install `ovftool`. - ### VNC port discovery Packer needs to decide on a port to use for VNC when building remotely. To find @@ -503,7 +502,7 @@ Depending on your network configuration, it may be difficult to use packer's built-in HTTP server with ESXi. Instead, you can provide a kickstart or preseed file by attaching a floppy disk. An example below, based on RHEL: -```json +``` json { "builders": [ { @@ -517,9 +516,9 @@ file by attaching a floppy disk. An example below, based on RHEL: } ``` -It's also worth noting that `ks=floppy` has been deprecated. Later versions of the Anaconda installer (used in RHEL/CentOS 7 and Fedora) may require a different syntax to source a kickstart file from a mounted floppy image. +It's also worth noting that `ks=floppy` has been deprecated. Later versions of the Anaconda installer (used in RHEL/CentOS 7 and Fedora) may require a different syntax to source a kickstart file from a mounted floppy image. -```json +``` json { "builders": [ { diff --git a/website/source/docs/builders/vmware-vmx.html.md b/website/source/docs/builders/vmware-vmx.html.md index 8c885c653..fb2fc2d86 100644 --- a/website/source/docs/builders/vmware-vmx.html.md +++ b/website/source/docs/builders/vmware-vmx.html.md @@ -1,12 +1,12 @@ --- +description: | + This VMware Packer builder is able to create VMware virtual machines from an + existing VMware virtual machine (a VMX file). It currently supports building + virtual machines on hosts running VMware Fusion Professional for OS X, VMware + Workstation for Linux and Windows, and VMware Player on Linux. 
layout: docs -sidebar_current: docs-builders-vmware-vmx -page_title: VMware VMX - Builders -description: |- - This VMware Packer builder is able to create VMware virtual machines from an - existing VMware virtual machine (a VMX file). It currently supports building - virtual machines on hosts running VMware Fusion Professional for OS X, VMware - Workstation for Linux and Windows, and VMware Player on Linux. +page_title: 'VMware VMX - Builders' +sidebar_current: 'docs-builders-vmware-vmx' --- # VMware Builder (from VMX) @@ -32,7 +32,7 @@ VMware virtual machine. Here is an example. This example is fully functional as long as the source path points to a real VMX file with the proper settings: -```json +``` json { "type": "vmware-vmx", "source_path": "/path/to/a/vm.vmx", @@ -54,27 +54,27 @@ builder. ### Required: -- `source_path` (string) - Path to the source VMX file to clone. +- `source_path` (string) - Path to the source VMX file to clone. -- `ssh_username` (string) - The username to use to SSH into the machine once +- `ssh_username` (string) - The username to use to SSH into the machine once the OS is installed. ### Optional: -- `boot_command` (array of strings) - This is an array of commands to type +- `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should be to type just enough to initialize the operating system installer. Special keys can be typed as well, and are covered in the section below on the boot command. If this is not specified, it is assumed the installer will start itself. -- `boot_wait` (string) - The time to wait after booting the initial virtual +- `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five seconds and one minute 30 seconds, respectively. 
If this isn't specified, the default is 10 seconds.

-- `floppy_files` (array of strings) - A list of files to place onto a floppy
+- `floppy_files` (array of strings) - A list of files to place onto a floppy
   disk that is attached when the VM is booted. This is most useful for
   unattended Windows installs, which look for an `Autounattend.xml` file on
   removable media. By default, no floppy will be attached. All files listed in
@@ -84,44 +84,44 @@ builder.
   and \[\]) are allowed. Directory names are also allowed, which will add all
   the files found in the directory to the floppy.

-- `floppy_dirs` (array of strings) - A list of directories to place onto
+- `floppy_dirs` (array of strings) - A list of directories to place onto
    the floppy disk recursively. This is similar to the `floppy_files` option
    except that the directory structure is preserved. This is useful for when
    your floppy disk includes drivers or if you just want to organize its
    contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed.

-- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
+- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
   "/Applications/VMware Fusion.app" but this setting allows you to
   customize this.

-- `headless` (boolean) - Packer defaults to building VMware virtual machines
+- `headless` (boolean) - Packer defaults to building VMware virtual machines
   by launching a GUI that shows the console of the machine being built. When
   this value is set to true, the machine will start without a console. For
   VMware machines, Packer will output VNC connection information in case you
   need to connect to the console to debug the build process.

-- `http_directory` (string) - Path to a directory to serve using an
+- `http_directory` (string) - Path to a directory to serve using an
   HTTP server. The files in this directory will be available over HTTP that
   will be requestable from the virtual machine.
This is useful for hosting kickstart files and so on. By default this is "", which means no HTTP server will be started. The address and port of the HTTP server will be available as variables in `boot_command`. This is covered in more detail below. -- `http_port_min` and `http_port_max` (integer) - These are the minimum and +- `http_port_min` and `http_port_max` (integer) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum port the same. By default the values are 8000 and 9000, respectively. -- `output_directory` (string) - This is the path to the directory where the +- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `shutdown_command` (string) - The command to use to gracefully shut down the +- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine unless a shutdown command takes place inside script so this may safely be omitted. If @@ -129,52 +129,52 @@ builder. since reboots may fail and specify the final shutdown command in your last script. -- `shutdown_timeout` (string) - The amount of time to wait after executing the +- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. 
If it doesn't shut down in this time, it is an error. By default, the timeout is "5m", or five minutes. -- `skip_compaction` (boolean) - VMware-created disks are defragmented and +- `skip_compaction` (boolean) - VMware-created disks are defragmented and compacted at the end of the build process using `vmware-vdiskmanager`. In certain rare cases, this might actually end up making the resulting disks slightly larger. If you find this to be the case, you can disable compaction using this configuration value. -- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to +- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to upload into the VM. Valid values are "darwin", "linux", and "windows". By default, this is empty, which means VMware tools won't be uploaded. -- `tools_upload_path` (string) - The path in the VM to upload the +- `tools_upload_path` (string) - The path in the VM to upload the VMware tools. This only takes effect if `tools_upload_flavor` is non-empty. This is a [configuration template](/docs/templates/engine.html) that has a single valid variable: `Flavor`, which will be the value of `tools_upload_flavor`. By default the upload path is set to `{{.Flavor}}.iso`. -- `vm_name` (string) - This is the name of the VMX file for the new virtual +- `vm_name` (string) - This is the name of the VMX file for the new virtual machine, without the file extension. By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build. -- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter +- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter into the virtual machine VMX file. This is for advanced users who want to set properties such as memory, CPU, etc. 
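
For example, the memory size and CPU count of the machine can be raised through
`vmx_data`. The keys below are standard VMX settings, though the values are
only illustrative:

``` json
{
  "vmx_data": {
    "memsize": "2048",
    "numvcpus": "2"
  }
}
```
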
-- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
+- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
   except that it is run after the virtual machine is shutdown, and before the
   virtual machine is exported.

-- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces from
+- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces from
   the VMX file after building. This is for advanced users who understand the
   ramifications, but is useful for building Vagrant boxes since Vagrant will
   create ethernet interfaces when provisioning a box.

-- `vnc_bind_address` (string / IP address) - The IP address that should be binded
-  to for VNC. By default packer will use 127.0.0.1 for this.
+- `vnc_bind_address` (string / IP address) - The IP address that should be bound
+  to for VNC. By default packer will use 127.0.0.1 for this.

-- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is
+- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is
   used to secure the VNC communication with the VM.

-- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
+- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
   to use for VNC access to the virtual machine. The builder uses VNC to type
   the initial `boot_command`. Because Packer generally runs in parallel,
   Packer uses a randomly chosen port in this range that appears available. By
@@ -193,57 +193,57 @@ template.
The boot command is "typed" character for character over a VNC
connection to the machine, simulating a human actually typing the keyboard.

--> Keystrokes are typed as separate key up/down events over VNC with a
-   default 100ms delay. The delay alleviates issues with latency and CPU
-   contention. For local builds you can tune this delay by specifying
-   e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.
+-> Keystrokes are typed as separate key up/down events over VNC with a
+default 100ms delay. The delay alleviates issues with latency and CPU
+contention. For local builds you can tune this delay by specifying
+e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command.

There are a set of special keys available. If these are in your boot command,
they will be replaced by the proper key:

-- `<bs>` - Backspace
+- `<bs>` - Backspace

-- `<del>` - Delete
+- `<del>` - Delete

-- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.
+- `<enter>` and `<return>` - Simulates an actual "enter" or "return" keypress.

-- `<esc>` - Simulates pressing the escape key.
+- `<esc>` - Simulates pressing the escape key.

-- `<tab>` - Simulates pressing the tab key.
+- `<tab>` - Simulates pressing the tab key.

-- `<f1>` - `<f12>` - Simulates pressing a function key.
+- `<f1>` - `<f12>` - Simulates pressing a function key.

-- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.
+- `<up>` `<down>` `<left>` `<right>` - Simulates pressing an arrow key.

-- `<spacebar>` - Simulates pressing the spacebar.
+- `<spacebar>` - Simulates pressing the spacebar.

-- `<insert>` - Simulates pressing the insert key.
+- `<insert>` - Simulates pressing the insert key.

-- `<home>` `<end>` - Simulates pressing the home and end keys.
+- `<home>` `<end>` - Simulates pressing the home and end keys.

-- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
+- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.

-- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.
+- `<leftAlt>` `<rightAlt>` - Simulates pressing the alt key.

-- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.
+- `<leftCtrl>` `<rightCtrl>` - Simulates pressing the ctrl key.

-- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.
+- `<leftShift>` `<rightShift>` - Simulates pressing the shift key.

-- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.
+- `<leftAltOn>` `<rightAltOn>` - Simulates pressing and holding the alt key.

-- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl
+- `<leftCtrlOn>` `<rightCtrlOn>` - Simulates pressing and holding the ctrl
   key.

-- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the
+- `<leftShiftOn>` `<rightShiftOn>` - Simulates pressing and holding the
   shift key.

-- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.
+- `<leftAltOff>` `<rightAltOff>` - Simulates releasing a held alt key.

-- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.
+- `<leftCtrlOff>` `<rightCtrlOff>` - Simulates releasing a held ctrl key.

-- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.
+- `<leftShiftOff>` `<rightShiftOff>` - Simulates releasing a held shift key.

-- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
+- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
   sending any additional keys. This is useful if you have to generally wait
   for the UI to update before typing more.

@@ -251,7 +251,7 @@
In addition to the special keys, each command to type is treated as a
[template engine](/docs/templates/engine.html). The
available variables are:

-- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
+- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server
   that is started serving the directory specified by the `http_directory`
   configuration parameter. If `http_directory` isn't specified, these will be
   blank!
@@ -259,7 +259,7 @@
Example boot command. This is actually a working boot command used to start an
Ubuntu 12.04 installer:

-```text
+``` text
[
  "<esc><wait>",
  "/install/vmlinuz noapic ",
diff --git a/website/source/docs/builders/vmware.html.md b/website/source/docs/builders/vmware.html.md
index 8f36505e1..9ec849fa6 100644
--- a/website/source/docs/builders/vmware.html.md
+++ b/website/source/docs/builders/vmware.html.md
@@ -1,10 +1,10 @@
---
+description: |
+    The VMware Packer builder is able to create VMware virtual machines for use
+    with any VMware product.
layout: docs
-sidebar_current: docs-builders-vmware
-page_title: VMware - Builders
-description: |-
-    The VMware Packer builder is able to create VMware virtual machines for use
-    with any VMware product.
+page_title: 'VMware - Builders'
+sidebar_current: 'docs-builders-vmware'
---

# VMware Builder

@@ -16,14 +16,14 @@
Packer actually comes with multiple builders able to create VMware machines,
depending on the strategy you want to use to build the image.
Packer supports the following VMware builders: -- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file, - creates a brand new VMware VM, installs an OS, provisions software within the - OS, then exports that machine to create an image. This is best for people who - want to start from scratch. +- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file, + creates a brand new VMware VM, installs an OS, provisions software within the + OS, then exports that machine to create an image. This is best for people who + want to start from scratch. -- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an - existing VMware machine (from a VMX file), runs provisioners on top of that - VM, and exports that machine to create an image. This is best if you have an - existing VMware VM you want to use as the source. As an additional benefit, - you can feed the artifact of this builder back into Packer to iterate on a - machine. +- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an + existing VMware machine (from a VMX file), runs provisioners on top of that + VM, and exports that machine to create an image. This is best if you have an + existing VMware VM you want to use as the source. As an additional benefit, + you can feed the artifact of this builder back into Packer to iterate on a + machine. diff --git a/website/source/docs/commands/build.html.md b/website/source/docs/commands/build.html.md index 41d056ae5..7d331159e 100644 --- a/website/source/docs/commands/build.html.md +++ b/website/source/docs/commands/build.html.md @@ -1,12 +1,12 @@ --- +description: | + The `packer build` command takes a template and runs all the builds within it + in order to generate a set of artifacts. The various builds specified within a + template are executed in parallel, unless otherwise specified. And the + artifacts that are created will be outputted at the end of the build. 
layout: docs -sidebar_current: docs-commands-build -page_title: packer build - Commands -description: |- - The `packer build` command takes a template and runs all the builds within it - in order to generate a set of artifacts. The various builds specified within a - template are executed in parallel, unless otherwise specified. And the - artifacts that are created will be outputted at the end of the build. +page_title: 'packer build - Commands' +sidebar_current: 'docs-commands-build' --- # `build` Command @@ -18,34 +18,34 @@ that are created will be outputted at the end of the build. ## Options -- `-color=false` - Disables colorized output. Enabled by default. +- `-color=false` - Disables colorized output. Enabled by default. -- `-debug` - Disables parallelization and enables debug mode. Debug mode flags - the builders that they should output debugging information. The exact behavior - of debug mode is left to the builder. In general, builders usually will stop - between each step, waiting for keyboard input before continuing. This will - allow the user to inspect state and so on. +- `-debug` - Disables parallelization and enables debug mode. Debug mode flags + the builders that they should output debugging information. The exact behavior + of debug mode is left to the builder. In general, builders usually will stop + between each step, waiting for keyboard input before continuing. This will + allow the user to inspect state and so on. -- `-except=foo,bar,baz` - Builds all the builds except those with the given - comma-separated names. Build names by default are the names of their builders, - unless a specific `name` attribute is specified within the configuration. +- `-except=foo,bar,baz` - Builds all the builds except those with the given + comma-separated names. Build names by default are the names of their builders, + unless a specific `name` attribute is specified within the configuration. 
-- `-force` - Forces a builder to run when artifacts from a previous build - prevent a build from running. The exact behavior of a forced build is left to - the builder. In general, a builder supporting the forced build will remove the - artifacts from the previous build. This will allow the user to repeat a build - without having to manually clean these artifacts beforehand. +- `-force` - Forces a builder to run when artifacts from a previous build + prevent a build from running. The exact behavior of a forced build is left to + the builder. In general, a builder supporting the forced build will remove the + artifacts from the previous build. This will allow the user to repeat a build + without having to manually clean these artifacts beforehand. -- `-on-error=cleanup` (default), `-on-error=abort`, `-on-error=ask` - Selects - what to do when the build fails. `cleanup` cleans up after the previous - steps, deleting temporary files and virtual machines. `abort` exits without - any cleanup, which might require the next build to use `-force`. `ask` - presents a prompt and waits for you to decide to clean up, abort, or retry the - failed step. +- `-on-error=cleanup` (default), `-on-error=abort`, `-on-error=ask` - Selects + what to do when the build fails. `cleanup` cleans up after the previous + steps, deleting temporary files and virtual machines. `abort` exits without + any cleanup, which might require the next build to use `-force`. `ask` + presents a prompt and waits for you to decide to clean up, abort, or retry the + failed step. -- `-only=foo,bar,baz` - Only build the builds with the given comma-separated - names. Build names by default are the names of their builders, unless a - specific `name` attribute is specified within the configuration. +- `-only=foo,bar,baz` - Only build the builds with the given comma-separated + names. Build names by default are the names of their builders, unless a + specific `name` attribute is specified within the configuration. 
-- `-parallel=false` - Disable parallelization of multiple builders (on by - default). +- `-parallel=false` - Disable parallelization of multiple builders (on by + default). diff --git a/website/source/docs/commands/fix.html.md b/website/source/docs/commands/fix.html.md index 72024325c..1793d3275 100644 --- a/website/source/docs/commands/fix.html.md +++ b/website/source/docs/commands/fix.html.md @@ -1,12 +1,12 @@ --- +description: | + The `packer fix` command takes a template and finds backwards incompatible + parts of it and brings it up to date so it can be used with the latest version + of Packer. After you update to a new Packer release, you should run the fix + command to make sure your templates work with the new release. layout: docs -sidebar_current: docs-commands-fix -page_title: packer fix - Commands -description: |- - The `packer fix` command takes a template and finds backwards incompatible - parts of it and brings it up to date so it can be used with the latest version - of Packer. After you update to a new Packer release, you should run the fix - command to make sure your templates work with the new release. +page_title: 'packer fix - Commands' +sidebar_current: 'docs-commands-fix' --- # `fix` Command @@ -20,7 +20,7 @@ The fix command will output the changed template to standard out, so you should redirect standard out using standard OS-specific techniques if you want to save it to a file. For example, on Linux systems, you may want to do this: -```shell +``` shell $ packer fix old.json > new.json ``` @@ -28,7 +28,7 @@ If fixing fails for any reason, the fix command will exit with a non-zero exit status. Error messages appear on standard error, so if you're redirecting output, you'll still see error messages. --> **Even when Packer fix doesn't do anything** to the template, the template +-> **Even when Packer fix doesn't do anything** to the template, the template will be outputted to standard out.
Things such as configuration key ordering and indentation may be changed. The output format, however, is pretty-printed for human readability. diff --git a/website/source/docs/commands/index.html.md b/website/source/docs/commands/index.html.md index b31f8e128..abbb84855 100644 --- a/website/source/docs/commands/index.html.md +++ b/website/source/docs/commands/index.html.md @@ -1,13 +1,13 @@ --- +description: | + Packer is controlled using a command-line interface. All interaction with + Packer is done via the `packer` tool. Like many other command-line tools, the + `packer` tool takes a subcommand to execute, and that subcommand may have + additional options as well. Subcommands are executed with `packer SUBCOMMAND`, + where "SUBCOMMAND" is the actual command you wish to execute. layout: docs -sidebar_current: docs-commands page_title: Commands -description: |- - Packer is controlled using a command-line interface. All interaction with - Packer is done via the `packer` tool. Like many other command-line tools, the - `packer` tool takes a subcommand to execute, and that subcommand may have - additional options as well. Subcommands are executed with `packer SUBCOMMAND`, - where "SUBCOMMAND" is the actual command you wish to execute. +sidebar_current: 'docs-commands' --- # Packer Commands (CLI) @@ -46,7 +46,7 @@ The machine-readable output format can be enabled by passing the output to become machine-readable on stdout. Logging, if enabled, continues to appear on stderr. An example of the output is shown below: -```text +``` text $ packer -machine-readable version 1376289459,,version,0.2.4 1376289459,,version-prerelease, @@ -58,7 +58,7 @@ The format will be covered in more detail later. But as you can see, the output immediately becomes machine-friendly. Try some other commands with the `-machine-readable` flag to see!
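Because each machine-readable line is plain comma-separated text, standard Unix tools can split it — a small sketch using the sample output above, anticipating the field layout covered below (this naive comma split does not handle the escape Packer uses for literal commas in data):

``` shell
# Sample machine-readable output, copied from the example above.
cat > sample.log <<'EOF'
1376289459,,version,0.2.4
1376289459,,version-prerelease,
EOF

# Split each comma-separated line into its fields with awk.
awk -F, '{ printf "type=%s data=%s\n", $3, $4 }' sample.log
```

For the first sample line this prints `type=version data=0.2.4`.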
-~> The `-machine-readable` flag is designed for automated environments and is +~> The `-machine-readable` flag is designed for automated environments and is mutually-exclusive with the `-debug` flag, which is designed for interactive environments. @@ -70,26 +70,26 @@ This makes it more convenient to parse using standard Unix tools such as `awk` o The format is: -```text +``` text timestamp,target,type,data... ``` Each component is explained below: -- `timestamp` is a Unix timestamp in UTC of when the message was printed. +- `timestamp` is a Unix timestamp in UTC of when the message was printed. -- `target` is the target of the following output. This is empty if the message - is related to Packer globally. Otherwise, this is generally a build name so - you can relate output to a specific build while parallel builds are running. +- `target` is the target of the following output. This is empty if the message + is related to Packer globally. Otherwise, this is generally a build name so + you can relate output to a specific build while parallel builds are running. -- `type` is the type of machine-readable message being outputted. There are a - set of standard types which are covered later, but each component of Packer - (builders, provisioners, etc.) may output their own custom types as well, - allowing the machine-readable output to be infinitely flexible. +- `type` is the type of machine-readable message being outputted. There are a + set of standard types which are covered later, but each component of Packer + (builders, provisioners, etc.) may output their own custom types as well, + allowing the machine-readable output to be infinitely flexible. -- `data` is zero or more comma-separated values associated with the prior type. - The exact amount and meaning of this data is type-dependent, so you must read - the documentation associated with the type to understand fully. +- `data` is zero or more comma-separated values associated with the prior type. 
+ The exact amount and meaning of this data is type-dependent, so you must read + the documentation associated with the type to understand fully. Within the format, if data contains a comma, it is replaced with `%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'` diff --git a/website/source/docs/commands/inspect.html.md b/website/source/docs/commands/inspect.html.md index 22cf3e592..b96f2e27b 100644 --- a/website/source/docs/commands/inspect.html.md +++ b/website/source/docs/commands/inspect.html.md @@ -1,13 +1,13 @@ --- +description: | + The `packer inspect` command takes a template and outputs the various + components a template defines. This can help you quickly learn about a + template without having to dive into the JSON itself. The command will tell + you things like what variables a template accepts, the builders it defines, + the provisioners it defines and the order they'll run, and more. layout: docs -sidebar_current: docs-commands-inspect -page_title: packer inspect - Commands -description: |- - The `packer inspect` command takes a template and outputs the various - components a template defines. This can help you quickly learn about a - template without having to dive into the JSON itself. The command will tell - you things like what variables a template accepts, the builders it defines, - the provisioners it defines and the order they'll run, and more. +page_title: 'packer inspect - Commands' +sidebar_current: 'docs-commands-inspect' --- # `inspect` Command @@ -30,7 +30,7 @@ your template by necessity. 
Given a basic template, here is an example of what the output might look like: -```text +``` text $ packer inspect template.json Variables and their defaults: diff --git a/website/source/docs/commands/push.html.md b/website/source/docs/commands/push.html.md index c0873331b..c30eb1587 100644 --- a/website/source/docs/commands/push.html.md +++ b/website/source/docs/commands/push.html.md @@ -1,10 +1,10 @@ --- +description: | + The `packer push` command uploads a template and other required files to the + Atlas build service, which will run your packer build for you. layout: docs -sidebar_current: docs-commands-push -page_title: packer push - Commands -description: |- - The `packer push` command uploads a template and other required files to the - Atlas build service, which will run your packer build for you. +page_title: 'packer push - Commands' +sidebar_current: 'docs-commands-push' --- # `push` Command @@ -23,7 +23,7 @@ artifacts in Atlas. In order to do that you will also need to configure the [Atlas post-processor](/docs/post-processors/atlas.html). This is optional, and both the post-processor and push commands can be used independently. -!> The push command uploads your template and other files, like provisioning +!> The push command uploads your template and other files, like provisioning scripts, to Atlas. Take care not to upload files that you don't intend to, like secrets or large binaries. **If you have secrets in your Packer template, you should [move them into environment @@ -35,46 +35,46 @@ configuration using the options below. ## Options -- `-token` - Your access token for the Atlas API. Login to Atlas to [generate an - Atlas Token](https://atlas.hashicorp.com/settings/tokens). The most convenient - way to configure your token is to set it to the `ATLAS_TOKEN` environment - variable, but you can also use `-token` on the command line. +- `-token` - Your access token for the Atlas API. 
Login to Atlas to [generate an + Atlas Token](https://atlas.hashicorp.com/settings/tokens). The most convenient + way to configure your token is to set it to the `ATLAS_TOKEN` environment + variable, but you can also use `-token` on the command line. -- `-name` - The name of the build in the service. This typically looks like - `hashicorp/precise64`, which follows the form `<username>/<name>`. This - must be specified here or in your template. +- `-name` - The name of the build in the service. This typically looks like + `hashicorp/precise64`, which follows the form `<username>/<name>`. This + must be specified here or in your template. -- `-sensitive` - A comma-separated list of variables that should be marked as - sensitive in the Terraform Enterprise ui. These variables' keys will be - visible, but their values will be redacted. example usage: - `-var 'supersecretpassword=mypassword' -sensitive=supersecretpassword1` +- `-sensitive` - A comma-separated list of variables that should be marked as + sensitive in the Terraform Enterprise UI. These variables' keys will be + visible, but their values will be redacted. Example usage: + `-var 'supersecretpassword=mypassword' -sensitive=supersecretpassword` -- `-var` - Set a variable in your packer template. This option can be used - multiple times. This is useful for setting version numbers for your build. +- `-var` - Set a variable in your packer template. This option can be used + multiple times. This is useful for setting version numbers for your build. -- `-var-file` - Set template variables from a file. +- `-var-file` - Set template variables from a file. ## Environment Variables -- `ATLAS_CAFILE` (path) - This should be a path to an X.509 PEM-encoded public - key.
If specified, this will be used to validate the certificate authority + that signed certificates used by an Atlas installation. -- `ATLAS_CAPATH` - This should be a path which contains an X.509 PEM-encoded - public key file. If specified, this will be used to validate the certificate - authority that signed certificates used by an Atlas installation. +- `ATLAS_CAPATH` - This should be a path which contains an X.509 PEM-encoded + public key file. If specified, this will be used to validate the certificate + authority that signed certificates used by an Atlas installation. ## Examples Push a Packer template: -```shell +``` shell $ packer push template.json ``` Push a Packer template with a custom token: -```shell +``` shell $ packer push -token ABCD1234 template.json ``` diff --git a/website/source/docs/commands/validate.html.md b/website/source/docs/commands/validate.html.md index 972e2a218..d1102f2d1 100644 --- a/website/source/docs/commands/validate.html.md +++ b/website/source/docs/commands/validate.html.md @@ -1,12 +1,12 @@ --- +description: | + The `packer validate` Packer command is used to validate the syntax and + configuration of a template. The command will return a zero exit status on + success, and a non-zero exit status on failure. Additionally, if a template + doesn't validate, any error messages will be outputted. layout: docs -sidebar_current: docs-commands-validate -page_title: packer validate - Commands -description: |- - The `packer validate` Packer command is used to validate the syntax and - configuration of a template. The command will return a zero exit status on - success, and a non-zero exit status on failure. Additionally, if a template - doesn't validate, any error messages will be outputted. +page_title: 'packer validate - Commands' +sidebar_current: 'docs-commands-validate' --- # `validate` Command @@ -19,7 +19,7 @@ be outputted. Example usage: -```text +``` text $ packer validate my-template.json Template validation failed. 
Errors are shown below. @@ -30,5 +30,5 @@ Errors validating build 'vmware'. 1 error(s) occurred: ## Options -- `-syntax-only` - Only the syntax of the template is checked. The configuration - is not validated. +- `-syntax-only` - Only the syntax of the template is checked. The configuration + is not validated. diff --git a/website/source/docs/extending/custom-builders.html.md b/website/source/docs/extending/custom-builders.html.md index dffabc838..a2ea25165 100644 --- a/website/source/docs/extending/custom-builders.html.md +++ b/website/source/docs/extending/custom-builders.html.md @@ -1,10 +1,10 @@ --- +description: | + It is possible to write custom builders using the Packer plugin interface, and + this page documents how to do that. layout: docs -sidebar_current: docs-extending-custom-builders -page_title: Custom Builders - Extending -description: |- - It is possible to write custom builders using the Packer plugin interface, and - this page documents how to do that. +page_title: 'Custom Builders - Extending' +sidebar_current: 'docs-extending-custom-builders' --- # Custom Builders @@ -19,7 +19,7 @@ plugin interface, and this page documents how to do that. Prior to reading this page, it is assumed you have read the page on [plugin development basics](/docs/extending/plugins.html). -~> **Warning!** This is an advanced topic. If you're new to Packer, we +~> **Warning!** This is an advanced topic. If you're new to Packer, we recommend getting a bit more comfortable before you dive into writing plugins. ## The Interface @@ -29,7 +29,7 @@ interface. It is reproduced below for reference. The actual interface in the source code contains some basic documentation as well explaining what each method should do. 
-```go +``` go type Builder interface { Prepare(...interface{}) error Run(ui Ui, hook Hook, cache Cache) (Artifact, error) @@ -134,14 +134,14 @@ When the machine is ready to be provisioned, run the `packer.HookProvision` hook, making sure the communicator is not nil, since this is required for provisioners. An example of calling the hook is shown below: -```go +``` go hook.Run(packer.HookProvision, ui, comm, nil) ``` At this point, Packer will run the provisioners and no additional work is necessary. --> **Note:** Hooks are still undergoing thought around their general design +-> **Note:** Hooks are still undergoing thought around their general design and will likely change in a future version. They aren't fully "baked" yet, so they aren't documented here other than to tell you how to hook in provisioners. diff --git a/website/source/docs/extending/custom-post-processors.html.md b/website/source/docs/extending/custom-post-processors.html.md index 901c08676..f4aa4d731 100644 --- a/website/source/docs/extending/custom-post-processors.html.md +++ b/website/source/docs/extending/custom-post-processors.html.md @@ -1,10 +1,10 @@ --- +description: | + Packer Post-processors are the components of Packer that transform one + artifact into another, for example by compressing files, or uploading them. layout: docs -sidebar_current: docs-extending-custom-post-processors -page_title: Custom Post-Processors - Extending -description: |- - Packer Post-processors are the components of Packer that transform one - artifact into another, for example by compressing files, or uploading them. +page_title: 'Custom Post-Processors - Extending' +sidebar_current: 'docs-extending-custom-post-processors' --- # Custom Post-Processors @@ -24,7 +24,7 @@ development basics](/docs/extending/plugins.html). Post-processor plugins implement the `packer.PostProcessor` interface and are served using the `plugin.ServePostProcessor` function. -~> **Warning!** This is an advanced topic. 
If you're new to Packer, we +~> **Warning!** This is an advanced topic. If you're new to Packer, we recommend getting a bit more comfortable before you dive into writing plugins. ## The Interface @@ -34,7 +34,7 @@ The interface that must be implemented for a post-processor is the actual interface in the source code contains some basic documentation as well explaining what each method should do. -```go +``` go type PostProcessor interface { Configure(interface{}) error PostProcess(Ui, Artifact) (a Artifact, keep bool, err error) diff --git a/website/source/docs/extending/custom-provisioners.html.md b/website/source/docs/extending/custom-provisioners.html.md index 2ac3c78e8..738f6cdf5 100644 --- a/website/source/docs/extending/custom-provisioners.html.md +++ b/website/source/docs/extending/custom-provisioners.html.md @@ -1,12 +1,12 @@ --- +description: | + Packer Provisioners are the components of Packer that install and configure + software into a running machine prior to turning that machine into an image. + An example of a provisioner is the shell provisioner, which runs shell scripts + within the machines. layout: docs -sidebar_current: docs-extending-custom-provisioners -page_title: Custom Provisioners - Extending -description: |- - Packer Provisioners are the components of Packer that install and configure - software into a running machine prior to turning that machine into an image. - An example of a provisioner is the shell provisioner, which runs shell scripts - within the machines. +page_title: 'Custom Provisioners - Extending' +sidebar_current: 'docs-extending-custom-provisioners' --- # Custom Provisioners @@ -23,7 +23,7 @@ development basics](/docs/extending/plugins.html). Provisioner plugins implement the `packer.Provisioner` interface and are served using the `plugin.ServeProvisioner` function. -~> **Warning!** This is an advanced topic. If you're new to Packer, we +~> **Warning!** This is an advanced topic. 
If you're new to Packer, we recommend getting a bit more comfortable before you dive into writing plugins. ## The Interface @@ -33,7 +33,7 @@ The interface that must be implemented for a provisioner is the actual interface in the source code contains some basic documentation as well explaining what each method should do. -```go +``` go type Provisioner interface { Prepare(...interface{}) error Provision(Ui, Communicator) error @@ -90,7 +90,7 @@ itself](https://github.com/hashicorp/packer/blob/master/packer/communicator.go) is really great as an overview of how to use the interface. You should begin by reading this. Once you have read it, you can see some example usage below: -```go +``` go // Build the remote command. var cmd packer.RemoteCmd cmd.Command = "echo foo" diff --git a/website/source/docs/extending/index.html.md b/website/source/docs/extending/index.html.md index ad57063c4..694ad78f7 100644 --- a/website/source/docs/extending/index.html.md +++ b/website/source/docs/extending/index.html.md @@ -1,11 +1,11 @@ --- +description: | + Packer is designed to be extensible. Because the surface area for workloads is + infinite, Packer supports plugins for builders, provisioners, and + post-processors. layout: docs page_title: Extending -sidebar_current: docs-extending -description: |- - Packer is designed to be extensible. Because the surface area for workloads is - infinite, Packer supports plugins for builders, provisioners, and - post-processors. +sidebar_current: 'docs-extending' --- # Extending Packer diff --git a/website/source/docs/extending/plugins.html.md b/website/source/docs/extending/plugins.html.md index 29ec9e183..3269fbbce 100644 --- a/website/source/docs/extending/plugins.html.md +++ b/website/source/docs/extending/plugins.html.md @@ -1,11 +1,11 @@ --- +description: | + Packer Plugins allow new functionality to be added to Packer without modifying + the core source code. 
Packer plugins are able to add new commands, builders, + provisioners, hooks, and more. layout: docs -sidebar_current: docs-extending-plugins -page_title: Plugins - Extending -description: |- - Packer Plugins allow new functionality to be added to Packer without modifying - the core source code. Packer plugins are able to add new commands, builders, - provisioners, hooks, and more. +page_title: 'Plugins - Extending' +sidebar_current: 'docs-extending-plugins' --- # Plugins @@ -80,7 +80,7 @@ assumed that you're familiar with the language. This page will not be a Go language tutorial. Thankfully, if you are familiar with Go, the Go toolchain provides many conveniences to help develop Packer plugins. -~> **Warning!** This is an advanced topic. If you're new to Packer, we +~> **Warning!** This is an advanced topic. If you're new to Packer, we recommend getting a bit more comfortable before you dive into writing plugins. ### Plugin System Architecture @@ -131,7 +131,7 @@ There are two steps involved in creating a plugin: A basic example is shown below. In this example, assume the `Builder` struct implements the `packer.Builder` interface: -```go +``` go import ( "github.com/hashicorp/packer/packer/plugin" ) @@ -155,7 +155,7 @@ using standard installation procedures. The specifics of how to implement each type of interface are covered in the relevant subsections available in the navigation to the left. -~> **Lock your dependencies!** Unfortunately, Go's dependency management +~> **Lock your dependencies!** Unfortunately, Go's dependency management story is fairly sad. There are various unofficial methods out there for locking dependencies, and using one of them is highly recommended since the Packer codebase will continue to improve, potentially breaking APIs along the way until @@ -171,7 +171,7 @@ visible on stderr when the `PACKER_LOG` environment variable is set.
Packer will prefix any logs from plugins with the path to that plugin to make it identifiable where the logs come from. Some example logs are shown below: -```text +``` text 2013/06/10 21:44:43 ui: Available commands are: 2013/06/10 21:44:43 Loading command: build 2013/06/10 21:44:43 packer-command-build: 2013/06/10 21:44:43 Plugin minimum port: 10000 @@ -203,7 +203,7 @@ While developing plugins, you can configure your Packer configuration to point directly to the compiled plugin in order to test it. For example, building the CustomCloud plugin, I may configure packer like so: -```json +``` json { "builders": { "custom-cloud": "/an/absolute/path/to/packer-builder-custom-cloud" diff --git a/website/source/docs/index.html.md b/website/source/docs/index.html.md index a285798dc..840804e94 100644 --- a/website/source/docs/index.html.md +++ b/website/source/docs/index.html.md @@ -1,11 +1,11 @@ --- +description: | + Welcome to the Packer documentation! This documentation is more of a reference + guide for all available features and options in Packer. If you're just getting + started with Packer, please start with the introduction and getting started + guide instead. layout: docs page_title: Documentation -description: |- - Welcome to the Packer documentation! This documentation is more of a reference - guide for all available features and options in Packer. If you're just getting - started with Packer, please start with the introduction and getting started - guide instead. --- # Packer Documentation diff --git a/website/source/docs/install/index.html.md b/website/source/docs/install/index.html.md index b35a888b7..fa8af69b4 100644 --- a/website/source/docs/install/index.html.md +++ b/website/source/docs/install/index.html.md @@ -1,19 +1,19 @@ --- +description: | + Installing Packer is simple. You can download a precompiled binary or compile + from source. This page details both methods. 
layout: docs -sidebar_current: docs-install page_title: Install -description: |- - Installing Packer is simple. You can download a precompiled binary or compile - from source. This page details both methods. +sidebar_current: 'docs-install' --- # Install Packer Installing Packer is simple. There are two approaches to installing Packer: -1. Using a [precompiled binary](#precompiled-binaries) +1. Using a [precompiled binary](#precompiled-binaries) -1. Installing [from source](#compiling-from-source) +2. Installing [from source](#compiling-from-source) Downloading a precompiled binary is easiest, and we provide downloads over TLS along with SHA256 sums to verify the binary. We also distribute a PGP signature @@ -38,27 +38,27 @@ To compile from source, you will need [Go](https://golang.org) installed and configured properly (including a `GOPATH` environment variable set), as well as a copy of [`git`](https://www.git-scm.com/) in your `PATH`. - 1. Clone the Packer repository from GitHub into your `GOPATH`: +1. Clone the Packer repository from GitHub into your `GOPATH`: - ```shell + ``` shell $ mkdir -p $GOPATH/src/github.com/mitchellh && cd $_ $ git clone https://github.com/mitchellh/packer.git $ cd packer ``` - 1. Bootstrap the project. This will download and compile libraries and tools - needed to compile Packer: +2. Bootstrap the project. This will download and compile libraries and tools + needed to compile Packer: - ```shell + ``` shell $ make bootstrap ``` - 1. Build Packer for your current system and put the - binary in `./bin/` (relative to the git checkout). The `make dev` target is - just a shortcut that builds `packer` for only your local build environment (no - cross-compiled targets). +3. Build Packer for your current system and put the + binary in `./bin/` (relative to the git checkout). The `make dev` target is + just a shortcut that builds `packer` for only your local build environment (no + cross-compiled targets).
- ```shell + ``` shell $ make dev ``` @@ -68,6 +68,6 @@ To verify Packer is properly installed, run `packer -v` on your system. You should see help output. If you are executing it from the command line, make sure it is on your PATH or you may get an error about Packer not being found. -```shell +``` shell $ packer -v ``` diff --git a/website/source/docs/other/core-configuration.html.md b/website/source/docs/other/core-configuration.html.md index 024afd8d7..234cf4563 100644 --- a/website/source/docs/other/core-configuration.html.md +++ b/website/source/docs/other/core-configuration.html.md @@ -1,12 +1,12 @@ --- +description: | + There are a few configuration settings that affect Packer globally by + configuring the core of Packer. These settings all have reasonable defaults, + so you generally don't have to worry about it until you want to tweak a + configuration. layout: docs -sidebar_current: docs-other-core-configuration -page_title: Core Configuration - Other -description: |- - There are a few configuration settings that affect Packer globally by - configuring the core of Packer. These settings all have reasonable defaults, - so you generally don't have to worry about it until you want to tweak a - configuration. +page_title: 'Core Configuration - Other' +sidebar_current: 'docs-other-core-configuration' --- # Core Configuration @@ -32,13 +32,13 @@ The format of the configuration file is basic JSON. Below is the list of all available configuration parameters for the core configuration file. None of these are required, since all have sane defaults. -- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum and - maximum ports that Packer uses for communication with plugins, since plugin - communication happens over TCP connections on your local host. By default - these are 10,000 and 25,000, respectively. Be sure to set a fairly wide range - here, since Packer can easily use over 25 ports on a single run. 
+- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum and + maximum ports that Packer uses for communication with plugins, since plugin + communication happens over TCP connections on your local host. By default + these are 10,000 and 25,000, respectively. Be sure to set a fairly wide range + here, since Packer can easily use over 25 ports on a single run. -- `builders`, `commands`, `post-processors`, and `provisioners` are objects that - are used to install plugins. The details of how exactly these are set is - covered in more detail in the [installing plugins documentation - page](/docs/extending/plugins.html). +- `builders`, `commands`, `post-processors`, and `provisioners` are objects that + are used to install plugins. The details of how exactly these are set is + covered in more detail in the [installing plugins documentation + page](/docs/extending/plugins.html). diff --git a/website/source/docs/other/debugging.html.md b/website/source/docs/other/debugging.html.md index f6b721110..b34f1485b 100644 --- a/website/source/docs/other/debugging.html.md +++ b/website/source/docs/other/debugging.html.md @@ -1,11 +1,11 @@ --- +description: | + Packer strives to be stable and bug-free, but issues inevitably arise where + certain things may not work entirely correctly, or may not appear to work + correctly. layout: docs -sidebar_current: docs-other-debugging -page_title: Debugging - Other -description: |- - Packer strives to be stable and bug-free, but issues inevitably arise where - certain things may not work entirely correctly, or may not appear to work - correctly. +page_title: 'Debugging - Other' +sidebar_current: 'docs-other-debugging' --- # Debugging Packer Builds @@ -66,7 +66,7 @@ In Windows you can set the detailed logs environmental variable `PACKER_LOG` or the log variable `PACKER_LOG_PATH` using powershell environment variables. 
For example: -```powershell +``` powershell $env:PACKER_LOG=1 $env:PACKER_LOG_PATH="packerlog.txt" ``` @@ -80,10 +80,8 @@ Issues may arise using and building Ubuntu AMIs where common packages that *should* be installed from Ubuntu's Main repository are not found during a provisioner step: -``` -amazon-ebs: No candidate version found for build-essential -amazon-ebs: No candidate version found for build-essential -``` + amazon-ebs: No candidate version found for build-essential + amazon-ebs: No candidate version found for build-essential This, obviously can cause problems where a build is unable to finish successfully as the proper packages cannot be provisioned correctly. The problem @@ -94,7 +92,7 @@ Adding the following provisioner to the packer template, allows for the cloud-init process to fully finish before packer starts provisioning the source AMI. -```json +``` json { "type": "shell", "inline": [ @@ -103,7 +101,6 @@ AMI. } ``` - ## Issues when using numerous Builders/Provisioners/Post-Processors Packer uses a separate process for each builder, provisioner, post-processor, @@ -111,13 +108,12 @@ and plugin. In certain cases, if you have too many of these, you can run out of [file descriptors](https://en.wikipedia.org/wiki/File_descriptor). This results in an error that might look like -```text +``` text error initializing provisioner 'powershell': fork/exec /files/go/bin/packer: too many open files ``` -On Unix systems, you can check what your file descriptor limit is with `ulimit --Sn`. You should check with your OS vendor on how to raise this limit. +On Unix systems, you can check what your file descriptor limit is with `ulimit -Sn`. You should check with your OS vendor on how to raise this limit. ## Issues when using long temp directory @@ -126,7 +122,7 @@ directory for temporary files. Some operating systems place a limit on the length of the socket name, usually between 80 and 110 characters. 
If you get an error like this (for any builder, not just docker): -```text +``` text Failed to initialize build 'docker': error initializing builder 'docker': plugin exited before we could connect ``` diff --git a/website/source/docs/other/environment-variables.html.md b/website/source/docs/other/environment-variables.html.md index b32277b78..dca39ae9e 100644 --- a/website/source/docs/other/environment-variables.html.md +++ b/website/source/docs/other/environment-variables.html.md @@ -1,9 +1,8 @@ --- +description: 'Packer uses a variety of environmental variables.' layout: docs -sidebar_current: docs-other-environment-variables -page_title: Environment Variables - Other -description: |- - Packer uses a variety of environmental variables. +page_title: 'Environment Variables - Other' +sidebar_current: 'docs-other-environment-variables' --- # Environment Variables for Packer @@ -11,38 +10,38 @@ description: |- Packer uses a variety of environmental variables. A listing and description of each can be found below: -- `PACKER_CACHE_DIR` - The location of the packer cache. +- `PACKER_CACHE_DIR` - The location of the packer cache. -- `PACKER_CONFIG` - The location of the core configuration file. The format of +- `PACKER_CONFIG` - The location of the core configuration file. The format of the configuration file is basic JSON. See the [core configuration page](/docs/other/core-configuration.html). -- `PACKER_LOG` - Setting this to any value other than "" (empty string) or "0" will enable the logger. See the +- `PACKER_LOG` - Setting this to any value other than "" (empty string) or "0" will enable the logger. See the [debugging page](/docs/other/debugging.html). -- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must be +- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must be set for any logging to occur. See the [debugging page](/docs/other/debugging.html). 
-- `PACKER_NO_COLOR` - Setting this to any value will disable color in +- `PACKER_NO_COLOR` - Setting this to any value will disable color in the terminal. -- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for +- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for communication with plugins, since plugin communication happens over TCP connections on your local host. The default is 25,000. See the [core configuration page](/docs/other/core-configuration.html). -- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for +- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for communication with plugins, since plugin communication happens over TCP connections on your local host. The default is 10,000. See the [core configuration page](/docs/other/core-configuration.html). -- `CHECKPOINT_DISABLE` - When Packer is invoked it sometimes calls out to +- `CHECKPOINT_DISABLE` - When Packer is invoked it sometimes calls out to [checkpoint.hashicorp.com](https://checkpoint.hashicorp.com/) to look for new versions of Packer. If you want to disable this for security or privacy reasons, you can set this environment variable to `1`. -- `TMPDIR` (Unix) / `TMP` (Windows) - The location of the directory used for temporary files (defaults +- `TMPDIR` (Unix) / `TMP` (Windows) - The location of the directory used for temporary files (defaults to `/tmp` on Linux/Unix and `%USERPROFILE%\AppData\Local\Temp` on Windows Vista and above). 
It might be necessary to customize it when working with large files since `/tmp` is a memory-backed filesystem in some Linux diff --git a/website/source/docs/post-processors/alicloud-import.html.md b/website/source/docs/post-processors/alicloud-import.html.md index 5e0e645da..eec52c90d 100644 --- a/website/source/docs/post-processors/alicloud-import.html.md +++ b/website/source/docs/post-processors/alicloud-import.html.md @@ -4,7 +4,7 @@ description: | various builders and imports it to an Alicloud customized image list. layout: docs page_title: 'Alicloud Import Post-Processor' -... +--- # Alicloud Import Post-Processor @@ -27,59 +27,59 @@ two categories: required and optional parameters. ### Required: -- `access_key` (string) - This is the Alicloud access key. It must be provided, - but it can also be sourced from the `ALICLOUD_ACCESS_KEY` environment - variable. +- `access_key` (string) - This is the Alicloud access key. It must be provided, + but it can also be sourced from the `ALICLOUD_ACCESS_KEY` environment + variable. -- `secret_key` (string) - This is the Alicloud secret key. It must be provided, - but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment - variable. +- `secret_key` (string) - This is the Alicloud secret key. It must be provided, + but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment + variable. -- `region` (string) - This is the Alicloud region. It must be provided, but it - can also be sourced from the `ALICLOUD_REGION` environment variables. +- `region` (string) - This is the Alicloud region. It must be provided, but it + can also be sourced from the `ALICLOUD_REGION` environment variable. -- `image_name` (string) - The name of the user-defined image, [2, 128] English - or Chinese characters. It must begin with an uppercase/lowercase letter or - a Chinese character, and may contain numbers, `_` or `-`. It cannot begin - with http:// or https://.
+- `image_name` (string) - The name of the user-defined image, \[2, 128\] English + or Chinese characters. It must begin with an uppercase/lowercase letter or + a Chinese character, and may contain numbers, `_` or `-`. It cannot begin + with `http://` or `https://`. -- `oss_bucket_name` (string) - The name of the OSS bucket where the RAW or VHD - file will be copied to for import. If the Bucket isn't exist, post-process - will create it for you. +- `oss_bucket_name` (string) - The name of the OSS bucket where the RAW or VHD + file will be copied to for import. If the bucket doesn't exist, the + post-processor will create it for you. -- `image_os_type` (string) - Type of the OS linux/windows +- `image_os_type` (string) - Type of the OS: `linux` or `windows` -- `image_platform` (string) - platform such `CentOS` +- `image_platform` (string) - Platform, such as `CentOS` -- `image_architecture` (string) - Platform type of the image system:i386 - | x86_64 +- `image_architecture` (string) - Platform type of the image system: i386 + | x86\_64 -- `format` (string) - The format of the image for import, now alicloud only - support RAW and VHD. +- `format` (string) - The format of the image for import; currently Alicloud + only supports RAW and VHD. ### Optional: -- `oss_key_name` (string) - The name of the object key in `oss_bucket_name` - where the RAW or VHD file will be copied to for import. +- `oss_key_name` (string) - The name of the object key in `oss_bucket_name` + where the RAW or VHD file will be copied to for import. -- `skip_clean` (boolean) - Whether we should skip removing the RAW or VHD file - uploaded to OSS after the import process has completed. `true` means that we - should leave it in the OSS bucket, `false` means to clean it out. Defaults to - `false`. +- `skip_clean` (boolean) - Whether we should skip removing the RAW or VHD file + uploaded to OSS after the import process has completed. `true` means that we + should leave it in the OSS bucket, `false` means to clean it out. Defaults to + `false`.
-- `image_description` (string) - The description of the image, with a length - limit of 0 to 256 characters. Leaving it blank means null, which is the - default value. It cannot begin with http:// or https://. +- `image_description` (string) - The description of the image, with a length + limit of 0 to 256 characters. Leaving it blank means null, which is the + default value. It cannot begin with `http://` or `https://`. -- `image_force_delete` (bool) - If this value is true, when the target image - name is duplicated with an existing image, it will delete the existing image - and then create the target image, otherwise, the creation will fail. The - default value is false. +- `image_force_delete` (bool) - If this value is true and the target image + name duplicates that of an existing image, the existing image will be deleted + and the target image created in its place; otherwise, the creation will fail. + The default value is false. -- `image_system_size` (int) - Size of the system disk, in GB, values range: - - cloud - 5 ~ 2000 - - cloud_efficiency - 20 ~ 2048 - - cloud_ssd - 20 ~ 2048 +- `image_system_size` (int) - Size of the system disk, in GB, values range: + - cloud - 5 ~ 2000 + - cloud\_efficiency - 20 ~ 2048 + - cloud\_ssd - 20 ~ 2048 ## Basic Example @@ -89,7 +89,7 @@ artifact. The user must have the role `AliyunECSImageImportDefaultRole` with role and policy for you if you have the privilege, otherwise, you have to ask the administrator to configure it for you in advance.
-```json +``` json "post-processors":[ { "access_key":"{{user `access_key`}}", diff --git a/website/source/docs/post-processors/amazon-import.html.md b/website/source/docs/post-processors/amazon-import.html.md index 2c3810151..fdbba8264 100644 --- a/website/source/docs/post-processors/amazon-import.html.md +++ b/website/source/docs/post-processors/amazon-import.html.md @@ -1,10 +1,10 @@ --- +description: | + The Packer Amazon Import post-processor takes an OVA artifact from various + builders and imports it to an AMI available to Amazon Web Services EC2. layout: docs -sidebar_current: docs-post-processors-amazon-import -page_title: Amazon Import - Post-Processors -description: |- - The Packer Amazon Import post-processor takes an OVA artifact from various - builders and imports it to an AMI available to Amazon Web Services EC2. +page_title: 'Amazon Import - Post-Processors' +sidebar_current: 'docs-post-processors-amazon-import' --- # Amazon Import Post-Processor @@ -13,7 +13,7 @@ Type: `amazon-import` The Packer Amazon Import post-processor takes an OVA artifact from various builders and imports it to an AMI available to Amazon Web Services EC2. -~> This post-processor is for advanced users. It depends on specific IAM roles inside AWS and is best used with images that operate with the EC2 configuration model (eg, cloud-init for Linux systems). Please ensure you read the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) before using this post-processor. +~> This post-processor is for advanced users. It depends on specific IAM roles inside AWS and is best used with images that operate with the EC2 configuration model (eg, cloud-init for Linux systems). Please ensure you read the [prerequisites for import](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) before using this post-processor. ## How Does it Work? 
@@ -31,48 +31,48 @@ Within each category, the available configuration keys are alphabetized. Required: -- `access_key` (string) - The access key used to communicate with AWS. [Learn +- `access_key` (string) - The access key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `region` (string) - The name of the region, such as `us-east-1` in which to upload the OVA file to S3 and create the AMI. A list of valid regions can be obtained with AWS CLI tools or by consulting the AWS website. +- `region` (string) - The name of the region, such as `us-east-1` in which to upload the OVA file to S3 and create the AMI. A list of valid regions can be obtained with AWS CLI tools or by consulting the AWS website. -- `s3_bucket_name` (string) - The name of the S3 bucket where the OVA file will be copied to for import. This bucket must exist when the post-processor is run. +- `s3_bucket_name` (string) - The name of the S3 bucket where the OVA file will be copied to for import. This bucket must exist when the post-processor is run. -- `secret_key` (string) - The secret key used to communicate with AWS. [Learn +- `secret_key` (string) - The secret key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) Optional: -- `ami_description` (string) - The description to set for the resulting - imported AMI. By default this description is generated by the AMI import +- `ami_description` (string) - The description to set for the resulting + imported AMI. By default this description is generated by the AMI import process. -- `ami_groups` (array of strings) - A list of groups that have access to - launch the imported AMI. By default no groups have permission to launch the - AMI. `all` will make the AMI publically accessible. AWS currently doesn't +- `ami_groups` (array of strings) - A list of groups that have access to + launch the imported AMI. 
By default no groups have permission to launch the + AMI. `all` will make the AMI publicly accessible. AWS currently doesn't accept any value other than "all". -- `ami_name` (string) - The name of the ami within the console. If not - specified, this will default to something like `ami-import-sfwerwf`. - Please note, specifying this option will result in a slightly longer +- `ami_name` (string) - The name of the AMI within the console. If not + specified, this will default to something like `ami-import-sfwerwf`. + Please note, specifying this option will result in a slightly longer execution time. -- `ami_users` (array of strings) - A list of account IDs that have access to - launch the imported AMI. By default no additional users other than the user +- `ami_users` (array of strings) - A list of account IDs that have access to + launch the imported AMI. By default no additional users other than the user importing the AMI have permission to launch it. -- `license_type` (string) - The license type to be used for the Amazon Machine - Image (AMI) after importing. Valid values: `AWS` or `BYOL` (default). - For more details regarding licensing, see - [Prerequisites](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) +- `license_type` (string) - The license type to be used for the Amazon Machine + Image (AMI) after importing. Valid values: `AWS` or `BYOL` (default). + For more details regarding licensing, see + [Prerequisites](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html) in the VM Import/Export User Guide. - `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) code. This should probably be a user variable since it changes all the time. -- `s3_key_name` (string) - The name of the key in `s3_bucket_name` where the - OVA file will be copied to for import. If not specified, this will default - to "packer-import-{{timestamp}}.ova".
This key (ie, the uploaded OVA) will +- `s3_key_name` (string) - The name of the key in `s3_bucket_name` where the + OVA file will be copied to for import. If not specified, this will default + to "packer-import-{{timestamp}}.ova". This key (ie, the uploaded OVA) will be removed after import, unless `skip_clean` is `true`. - `skip_clean` (boolean) - Whether we should skip removing the OVA file uploaded to S3 after the @@ -90,7 +90,7 @@ Optional: Here is a basic example. This assumes that the builder has produced an OVA artifact for us to work with, and IAM roles for import exist in the AWS account being imported into. -```json +``` json { "type": "amazon-import", "access_key": "YOUR KEY HERE", @@ -104,7 +104,7 @@ Here is a basic example. This assumes that the builder has produced an OVA artif } ``` --> **Note:** Packer can also read the access key and secret access key from +-> **Note:** Packer can also read the access key and secret access key from environmental variables. See the configuration reference in the section above for more information on what environmental variables Packer will look for. diff --git a/website/source/docs/post-processors/artifice.html.md b/website/source/docs/post-processors/artifice.html.md index 8e6bbb59d..3d01d1bb9 100644 --- a/website/source/docs/post-processors/artifice.html.md +++ b/website/source/docs/post-processors/artifice.html.md @@ -1,14 +1,14 @@ --- +description: | + The artifice post-processor overrides the artifact list from an upstream + builder or post-processor. All downstream post-processors will see the new + artifacts you specify. The primary use-case is to build artifacts inside a + packer builder -- for example, spinning up an EC2 instance to build a docker + container -- and then extracting the docker container and throwing away the + EC2 instance. 
layout: docs -sidebar_current: docs-post-processors-artifice -page_title: Artifice - Post-Processors -description: |- - The artifice post-processor overrides the artifact list from an upstream - builder or post-processor. All downstream post-processors will see the new - artifacts you specify. The primary use-case is to build artifacts inside a - packer builder -- for example, spinning up an EC2 instance to build a docker - container -- and then extracting the docker container and throwing away the - EC2 instance. +page_title: 'Artifice - Post-Processors' +sidebar_current: 'docs-post-processors-artifice' --- # Artifice Post-Processor @@ -37,12 +37,12 @@ jars, binaries, tarballs, msi installers, and more. Artifice helps you tie together a few other packer features: -- A builder, which spins up a VM (or container) to build your artifact -- A provisioner, which performs the steps to create your artifact -- A file provisioner, which downloads the artifact from the VM -- The artifice post-processor, which identifies which files have been +- A builder, which spins up a VM (or container) to build your artifact +- A provisioner, which performs the steps to create your artifact +- A file provisioner, which downloads the artifact from the VM +- The artifice post-processor, which identifies which files have been downloaded from the VM -- Additional post-processors, which push the artifact to Atlas, Docker +- Additional post-processors, which push the artifact to Atlas, Docker hub, etc. You will want to perform as much work as possible inside the VM. Ideally the @@ -55,7 +55,7 @@ The configuration allows you to specify which files comprise your artifact. ### Required: -- `files` (array of strings) - A list of files that comprise your artifact. +- `files` (array of strings) - A list of files that comprise your artifact. These files must exist on your local disk after the provisioning phase of packer is complete. 
These will replace any of the builder's original artifacts (such as a VM snapshot). @@ -64,16 +64,16 @@ The configuration allows you to specify which files comprise your artifact. This minimal example: -1. Spins up a cloned VMware virtual machine -1. Installs a [consul](https://www.consul.io/) release -1. Downloads the consul binary -1. Packages it into a `.tar.gz` file -1. Uploads it to Atlas. +1. Spins up a cloned VMware virtual machine +2. Installs a [consul](https://www.consul.io/) release +3. Downloads the consul binary +4. Packages it into a `.tar.gz` file +5. Uploads it to Atlas. VMX is a fast way to build and test locally, but you can easily substitute another builder. -```json +``` json { "builders": [ { @@ -128,7 +128,7 @@ proceeding artifact is passed to subsequent post-processors. If you use only one set of square braces the post-processors will run individually against the build artifact (the vmx file in this case) and it will not have the desired result. -```json +``` json { "post-processors": [ [ // <--- Start post-processor chain diff --git a/website/source/docs/post-processors/atlas.html.md b/website/source/docs/post-processors/atlas.html.md index 52df8a836..b54f1e45a 100644 --- a/website/source/docs/post-processors/atlas.html.md +++ b/website/source/docs/post-processors/atlas.html.md @@ -1,11 +1,11 @@ --- +description: | + The Atlas post-processor for Packer receives an artifact from a Packer build + and uploads it to Atlas. Atlas hosts and serves artifacts, allowing you to + version and distribute them in a simple way. layout: docs -sidebar_current: docs-post-processors-atlas -page_title: Atlas - Post-Processor -description: |- - The Atlas post-processor for Packer receives an artifact from a Packer build - and uploads it to Atlas. Atlas hosts and serves artifacts, allowing you to - version and distribute them in a simple way. 
+page_title: 'Atlas - Post-Processor' +sidebar_current: 'docs-post-processors-atlas' --- # Atlas Post-Processor @@ -22,7 +22,7 @@ You can also use the push command to [run packer builds in Atlas](/docs/commands/push.html). The push command and Atlas post-processor can be used together or independently. -~> If you'd like to publish a Vagrant box to [Vagrant Cloud](https://vagrantcloud.com), you must use the [`vagrant-cloud`](/docs/post-processors/vagrant-cloud.html) post-processor. +~> If you'd like to publish a Vagrant box to [Vagrant Cloud](https://vagrantcloud.com), you must use the [`vagrant-cloud`](/docs/post-processors/vagrant-cloud.html) post-processor. ## Workflow @@ -34,13 +34,13 @@ location in Atlas. Here is an example workflow: -1. Packer builds an AMI with the [Amazon AMI +1. Packer builds an AMI with the [Amazon AMI builder](/docs/builders/amazon.html) -1. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas. +2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas. The `atlas` post-processor is configured with the name of the AMI, for example `hashicorp/foobar`, to create the artifact in Atlas or update the version if the artifact already exists -1. The new version is ready and available to be used in deployments with a +3. The new version is ready and available to be used in deployments with a tool like [Terraform](https://www.terraform.io) ## Configuration @@ -49,12 +49,12 @@ The configuration allows you to specify and access the artifact in Atlas. ### Required: -- `artifact` (string) - The shorthand tag for your artifact that maps to +- `artifact` (string) - The shorthand tag for your artifact that maps to Atlas, i.e `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`. You must have access to the organization—hashicorp in this example—in order to add an artifact to the organization in Atlas. -- `artifact_type` (string) - For uploading artifacts to Atlas. 
+- `artifact_type` (string) - For uploading artifacts to Atlas. `artifact_type` can be set to any unique identifier, however, the following are recommended for consistency - `amazon.image`, `azure.image`, `cloudstack.image`, `digitalocean.image`, `docker.image`, @@ -64,38 +64,38 @@ The configuration allows you to specify and access the artifact in Atlas. ### Optional: -- `token` (string) - Your access token for the Atlas API. +- `token` (string) - Your access token for the Atlas API. --> Login to Atlas to [generate an Atlas +-> Login to Atlas to [generate an Atlas Token](https://atlas.hashicorp.com/settings/tokens). The most convenient way to configure your token is to set it to the `ATLAS_TOKEN` environment variable, but you can also use the `token` configuration option. -- `atlas_url` (string) - Override the base URL for Atlas. This is useful if +- `atlas_url` (string) - Override the base URL for Atlas. This is useful if you're using Atlas Enterprise in your own network. Defaults to `https://atlas.hashicorp.com/api/v1`. -- `metadata` (map) - Send metadata about the artifact. +- `metadata` (map) - Send metadata about the artifact. - - `description` (string) - Inside the metadata blob you can add a information + - `description` (string) - Inside the metadata blob you can add information about the uploaded artifact to Atlas. This will be reflected in the box description on Atlas. - - `provider` (string) - Used by Atlas to help determine, what should be used + - `provider` (string) - Used by Atlas to help determine what should be used to run the artifact. - - `version` (string) - Used by Atlas to give a semantic version to the + - `version` (string) - Used by Atlas to give a semantic version to the uploaded artifact. ## Environment Variables -- `ATLAS_CAFILE` (path) - This should be a path to an X.509 PEM-encoded public key. If specified, this will be used to validate the certificate authority that signed certificates used by an Atlas installation.
+- `ATLAS_CAFILE` (path) - This should be a path to an X.509 PEM-encoded public key. If specified, this will be used to validate the certificate authority that signed certificates used by an Atlas installation. -- `ATLAS_CAPATH` - This should be a path which contains an X.509 PEM-encoded public key file. If specified, this will be used to validate the certificate authority that signed certificates used by an Atlas installation. +- `ATLAS_CAPATH` - This should be a path which contains an X.509 PEM-encoded public key file. If specified, this will be used to validate the certificate authority that signed certificates used by an Atlas installation. ### Example Configuration -```json +``` json { "variables": { "aws_access_key": "ACCESS_KEY_HERE", diff --git a/website/source/docs/post-processors/checksum.html.md b/website/source/docs/post-processors/checksum.html.md index 7e72a0937..86912f828 100644 --- a/website/source/docs/post-processors/checksum.html.md +++ b/website/source/docs/post-processors/checksum.html.md @@ -1,14 +1,14 @@ --- +description: | + The checksum post-processor computes the specified checksums for the artifact + list from an upstream builder or post-processor. All downstream + post-processors will see the new artifacts. The primary use case is to + compute checksums for artifacts so that they can be verified later. This + post-processor takes each artifact, computes its checksums, and passes the + original artifacts along with the checksum files to the next post-processor.
So firstly this post-processor get - artifact, compute it checksum and pass to next post-processor original - artifacts and checksum files. +page_title: 'Checksum - Post-Processors' +sidebar_current: 'docs-post-processors-checksum' --- # Checksum Post-Processor @@ -32,7 +32,7 @@ post-processor. The example below is fully functional. -```json +``` json { "type": "checksum" } @@ -42,15 +42,15 @@ The example below is fully functional. Optional parameters: -- `checksum_types` (array of strings) - An array of strings of checksum types -to compute. Allowed values are md5, sha1, sha224, sha256, sha384, sha512. -- `output` (string) - Specify filename to store checksums. This defaults to +- `checksum_types` (array of strings) - An array of strings of checksum types + to compute. Allowed values are md5, sha1, sha224, sha256, sha384, sha512. +- `output` (string) - Specify filename to store checksums. This defaults to `packer_{{.BuildName}}_{{.BuilderType}}_{{.ChecksumType}}.checksum`. For example, if you had a builder named `database`, you might see the file written as `packer_database_docker_md5.checksum`. The following variables are available to use in the output template: - * `BuildName`: The name of the builder that produced the artifact. - * `BuilderType`: The type of builder used to produce the artifact. - * `ChecksumType`: The type of checksums the file contains. This should be + - `BuildName`: The name of the builder that produced the artifact. + - `BuilderType`: The type of builder used to produce the artifact. + - `ChecksumType`: The type of checksums the file contains. This should be used if you have more than one value in `checksum_types`. 
diff --git a/website/source/docs/post-processors/compress.html.md b/website/source/docs/post-processors/compress.html.md index 566801a0e..90b9b308f 100644 --- a/website/source/docs/post-processors/compress.html.md +++ b/website/source/docs/post-processors/compress.html.md @@ -1,10 +1,10 @@ --- +description: | + The Packer compress post-processor takes an artifact with files (such as from + VMware or VirtualBox) and compresses the artifact into a single archive. layout: docs -sidebar_current: docs-post-processors-compress -page_title: Compress - Post-Processors -description: |- - The Packer compress post-processor takes an artifact with files (such as from - VMware or VirtualBox) and compresses the artifact into a single archive. +page_title: 'Compress - Post-Processors' +sidebar_current: 'docs-post-processors-compress' --- # Compress Post-Processor @@ -22,7 +22,7 @@ By default, packer will build archives in `.tar.gz` format with the following filename: `packer_{{.BuildName}}_{{.BuilderType}}`. If you want to change this you will need to specify the `output` option. -- `output` (string) - The path to save the compressed archive. The archive +- `output` (string) - The path to save the compressed archive. The archive format is inferred from the filename. E.g. `.tar.gz` will be a gzipped tarball. `.zip` will be a zip file. If the extension can't be detected packer defaults to `.tar.gz` behavior but will not change @@ -32,14 +32,14 @@ you will need to specify the `output` option. you are executing multiple builders in parallel you should make sure `output` is unique for each one. For example `packer_{{.BuildName}}.zip`. -- `format` (string) - Disable archive format autodetection and use provided +- `format` (string) - Disable archive format autodetection and use provided string. 
-- `compression_level` (integer) - Specify the compression level, for +- `compression_level` (integer) - Specify the compression level, for algorithms that support it, from 1 through 9 inclusive. Typically higher compression levels take longer but produce smaller files. Defaults to `6` -- `keep_input_artifact` (boolean) - Keep source files; defaults to `false` +- `keep_input_artifact` (boolean) - Keep source files; defaults to `false` ### Supported Formats @@ -52,21 +52,21 @@ compress. Some minimal examples are shown below, showing only the post-processor configuration: -```json +``` json { "type": "compress", "output": "archive.tar.lz4" } ``` -```json +``` json { "type": "compress", "output": "{{.BuildName}}_bundle.zip" } ``` -```json +``` json { "type": "compress", "output": "log_{{.BuildName}}.gz", diff --git a/website/source/docs/post-processors/docker-import.html.md b/website/source/docs/post-processors/docker-import.html.md index 6757473f6..e4805406c 100644 --- a/website/source/docs/post-processors/docker-import.html.md +++ b/website/source/docs/post-processors/docker-import.html.md @@ -1,12 +1,12 @@ --- +description: | + The Packer Docker import post-processor takes an artifact from the docker + builder and imports it with Docker locally. This allows you to apply a + repository and tag to the image and lets you use the other Docker + post-processors such as docker-push to push the image to a registry. layout: docs -sidebar_current: docs-post-processors-docker-import -page_title: Docker Import - Post-Processors -description: |- - The Packer Docker import post-processor takes an artifact from the docker - builder and imports it with Docker locally. This allows you to apply a - repository and tag to the image and lets you use the other Docker - post-processors such as docker-push to push the image to a registry. 
+page_title: 'Docker Import - Post-Processors' +sidebar_current: 'docs-post-processors-docker-import' --- # Docker Import Post-Processor @@ -25,15 +25,15 @@ registry. The configuration for this post-processor only requires a `repository`; a `tag` is optional. -- `repository` (string) - The repository of the imported image. +- `repository` (string) - The repository of the imported image. -- `tag` (string) - The tag for the imported image. By default this is not set. +- `tag` (string) - The tag for the imported image. By default this is not set. ## Example An example is shown below, showing only the post-processor configuration: -```json +``` json { "type": "docker-import", "repository": "mitchellh/packer", diff --git a/website/source/docs/post-processors/docker-push.html.md b/website/source/docs/post-processors/docker-push.html.md index da6bffff1..6e22bdd79 100644 --- a/website/source/docs/post-processors/docker-push.html.md +++ b/website/source/docs/post-processors/docker-push.html.md @@ -1,10 +1,10 @@ --- +description: | + The Packer Docker push post-processor takes an artifact from the docker-import + post-processor and pushes it to a Docker registry. layout: docs -sidebar_current: docs-post-processors-docker-push -page_title: Docker Push - Post-Processors -description: |- - The Packer Docker push post-processor takes an artifact from the docker-import - post-processor and pushes it to a Docker registry. +page_title: 'Docker Push - Post-Processors' +sidebar_current: 'docs-post-processors-docker-push' --- # Docker Push Post-Processor @@ -19,41 +19,41 @@ pushes it to a Docker registry. This post-processor has only optional configuration: -- `aws_access_key` (string) - The AWS access key used to communicate with AWS. +- `aws_access_key` (string) - The AWS access key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `aws_secret_key` (string) - The AWS secret key used to communicate with AWS.
+- `aws_secret_key` (string) - The AWS secret key used to communicate with AWS. [Learn how to set this.](/docs/builders/amazon.html#specifying-amazon-credentials) -- `aws_token` (string) - The AWS access token to use. This is different from the +- `aws_token` (string) - The AWS access token to use. This is different from the access key and secret key. If you're not sure what this is, then you probably don't need it. This will also be read from the `AWS_SESSION_TOKEN` environmental variable. -- `ecr_login` (boolean) - Defaults to false. If true, the post-processor +- `ecr_login` (boolean) - Defaults to false. If true, the post-processor will log in in order to push the image to [Amazon EC2 Container Registry (ECR)](https://aws.amazon.com/ecr/). The post-processor only logs in for the duration of the push. If true, `login_server` is required and `login`, `login_username`, and `login_password` will be ignored. -- `login` (boolean) - Defaults to false. If true, the post-processor will +- `login` (boolean) - Defaults to false. If true, the post-processor will log in prior to pushing. To log in to ECR, see `ecr_login`. -- `login_email` (string) - The email to use to authenticate to login. +- `login_email` (string) - The email to use to authenticate to login. -- `login_username` (string) - The username to use to authenticate to login. +- `login_username` (string) - The username to use to authenticate to login. -- `login_password` (string) - The password to use to authenticate to login. +- `login_password` (string) - The password to use to authenticate to login. -- `login_server` (string) - The server address to login to. +- `login_server` (string) - The server address to log in to. -Note: When using _Docker Hub_ or _Quay_ registry servers, `login` must to be +Note: When using *Docker Hub* or *Quay* registry servers, `login` must be set to `true` and `login_email`, `login_username`, **and** `login_password` must be set to your registry credentials.
When using Docker Hub, `login_server` can be omitted. --> **Note:** If you login using the credentials above, the post-processor +-> **Note:** If you login using the credentials above, the post-processor will automatically log you out afterwards (just the server specified). ## Example diff --git a/website/source/docs/post-processors/docker-save.html.md b/website/source/docs/post-processors/docker-save.html.md index 7b9507539..6f709e9f3 100644 --- a/website/source/docs/post-processors/docker-save.html.md +++ b/website/source/docs/post-processors/docker-save.html.md @@ -1,12 +1,12 @@ --- +description: | + The Packer Docker Save post-processor takes an artifact from the docker + builder that was committed and saves it to a file. This is similar to + exporting the Docker image directly from the builder, except that it preserves + the hierarchy of images and metadata. layout: docs -sidebar_current: docs-post-processors-docker-save -page_title: Docker Save - Post-Processors -description: |- - The Packer Docker Save post-processor takes an artifact from the docker - builder that was committed and saves it to a file. This is similar to - exporting the Docker image directly from the builder, except that it preserves - the hierarchy of images and metadata. +page_title: 'Docker Save - Post-Processors' +sidebar_current: 'docs-post-processors-docker-save' --- # Docker Save Post-Processor @@ -26,13 +26,13 @@ familiar with this and vice versa. The configuration for this post-processor only requires one option. -- `path` (string) - The path to save the image. +- `path` (string) - The path to save the image. 
## Example An example is shown below, showing only the post-processor configuration: -```json +``` json { "type": "docker-save", "path": "foo.tar" diff --git a/website/source/docs/post-processors/docker-tag.html.md b/website/source/docs/post-processors/docker-tag.html.md index c5ae93937..6aa9e1935 100644 --- a/website/source/docs/post-processors/docker-tag.html.md +++ b/website/source/docs/post-processors/docker-tag.html.md @@ -1,12 +1,12 @@ --- +description: | + The Packer Docker Tag post-processor takes an artifact from the docker builder + that was committed and tags it into a repository. This allows you to use the + other Docker post-processors such as docker-push to push the image to a + registry. layout: docs -sidebar_current: docs-post-processors-docker-tag -page_title: Docker Tag - Post-Processors -description: |- - The Packer Docker Tag post-processor takes an artifact from the docker builder - that was committed and tags it into a repository. This allows you to use the - other Docker post-processors such as docker-push to push the image to a - registry. +page_title: 'Docker Tag - Post-Processors' +sidebar_current: 'docs-post-processors-docker-tag' --- # Docker Tag Post-Processor @@ -28,11 +28,11 @@ that this works with committed resources, rather than exported. The configuration for this post-processor requires `repository`; all other settings are optional. -- `repository` (string) - The repository of the image. +- `repository` (string) - The repository of the image. -- `tag` (string) - The tag for the image. By default this is not set. +- `tag` (string) - The tag for the image. By default this is not set. -- `force` (boolean) - If true, this post-processor forcibly tag the image even +- `force` (boolean) - If true, this post-processor forcibly tags the image even if the tag name collides with an existing tag. Defaults to `false`. It will be ignored if Docker >= 1.12.0 is detected, since the `force` option was removed after 1.12.0.
[reference](https://docs.docker.com/engine/deprecated/#/f-flag-on-docker-tag) @@ -41,7 +41,7 @@ are optional. An example is shown below, showing only the post-processor configuration: -```json +``` json { "type": "docker-tag", "repository": "mitchellh/packer", diff --git a/website/source/docs/post-processors/googlecompute-export.html.md b/website/source/docs/post-processors/googlecompute-export.html.md index 86c4ea34c..071d188fb 100644 --- a/website/source/docs/post-processors/googlecompute-export.html.md +++ b/website/source/docs/post-processors/googlecompute-export.html.md @@ -1,12 +1,12 @@ --- +description: | + The Google Compute Image Exporter post-processor exports an image from a + Packer googlecompute builder run and uploads it to Google Cloud Storage. The + exported images can be easily shared and uploaded to other Google Cloud + Projects. layout: docs -sidebar_current: docs-post-processors-googlecompute-export -page_title: Google Compute Image Exporter - Post-Processors -description: |- - The Google Compute Image Exporter post-processor exports an image from a - Packer googlecompute builder run and uploads it to Google Cloud Storage. The - exported images can be easily shared and uploaded to other Google Cloud - Projects. +page_title: 'Google Compute Image Exporter - Post-Processors' +sidebar_current: 'docs-post-processors-googlecompute-export' --- # Google Compute Image Exporter Post-Processor @@ -25,17 +25,16 @@ to the provided GCS `paths` using the same credentials. As such, the authentication credentials that built the image must have write permissions to the GCS `paths`. - ## Configuration ### Required -- `paths` (list of string) - The list of GCS paths, e.g. +- `paths` (list of string) - The list of GCS paths, e.g. 'gs://mybucket/path/to/file.tar.gz', where the image will be exported. 
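As a side note on the `paths` format above: a GCS path splits into a bucket name and an object name, the two pieces any uploader needs. A minimal Python sketch (illustrative only; the helper `parse_gcs_path` is not part of Packer):

```python
def parse_gcs_path(path):
    """Split a GCS path such as 'gs://mybucket/path/to/file.tar.gz'
    into its bucket name and object name."""
    prefix = "gs://"
    if not path.startswith(prefix):
        raise ValueError("not a GCS path: %r" % path)
    # Everything before the first slash is the bucket; the rest is the object.
    bucket, _, obj = path[len(prefix):].partition("/")
    return bucket, obj
```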
### Optional -- `keep_input_artifact` (bool) - If true, do not delete the Google Compute Engine +- `keep_input_artifact` (bool) - If true, do not delete the Google Compute Engine (GCE) image being exported. ## Basic Example @@ -50,7 +49,7 @@ In order for this example to work, the account associated with `account.json` must have write access to both `gs://mybucket1/path/to/file1.tar.gz` and `gs://mybucket2/path/to/file2.tar.gz`. -```json +``` json { "builders": [ { diff --git a/website/source/docs/post-processors/index.html.md b/website/source/docs/post-processors/index.html.md index aa3e0ce07..c555be3bd 100644 --- a/website/source/docs/post-processors/index.html.md +++ b/website/source/docs/post-processors/index.html.md @@ -1,10 +1,10 @@ --- +description: | + Post-processors run after the image is built by the builder and provisioned by + the provisioner(s). layout: docs -page_title: Post-Processors -sidebar_current: docs-post-processors -description: |- - Post-processors run after the image is built by the builder and provisioned by - the provisioner(s). +page_title: 'Post-Processors' +sidebar_current: 'docs-post-processors' --- # Post-Processors diff --git a/website/source/docs/post-processors/manifest.html.md b/website/source/docs/post-processors/manifest.html.md index 17406f0b0..9d930d241 100644 --- a/website/source/docs/post-processors/manifest.html.md +++ b/website/source/docs/post-processors/manifest.html.md @@ -1,10 +1,10 @@ --- +description: | + The manifest post-processor writes a JSON file with the build artifacts and + IDs from a packer run. layout: docs -sidebar_current: docs-post-processors-manifest -page_title: Manifest - Post-Processors -description: |- - The manifest post-processor writes a JSON file with the build artifacts and - IDs from a packer run.
+page_title: 'Manifest - Post-Processors' +sidebar_current: 'docs-post-processors-manifest' --- # Manifest Post-Processor @@ -23,14 +23,14 @@ You can specify manifest more than once and write each build to its own file, or ### Optional: -- `output` (string) The manifest will be written to this file. This defaults to `packer-manifest.json`. -- `strip_path` (bool) Write only filename without the path to the manifest file. This defaults to false. +- `output` (string) The manifest will be written to this file. This defaults to `packer-manifest.json`. +- `strip_path` (bool) Write only filename without the path to the manifest file. This defaults to false. ### Example Configuration You can simply add `{"type":"manifest"}` to your post-processor section. Below is a more verbose example: -```json +``` json { "post-processors": [ { diff --git a/website/source/docs/post-processors/shell-local.html.md b/website/source/docs/post-processors/shell-local.html.md index 1d4fbfe14..74e21993d 100644 --- a/website/source/docs/post-processors/shell-local.html.md +++ b/website/source/docs/post-processors/shell-local.html.md @@ -1,10 +1,10 @@ --- +description: | + The shell-local Packer post processor enables users to do some post processing + after artifacts have been built. layout: docs -sidebar_current: docs-post-processors-shell-local -page_title: Local Shell - Post-Processors -description: |- - The shell-local Packer post processor enables users to do some post processing - after artifacts have been built. +page_title: 'Local Shell - Post-Processors' +sidebar_current: 'docs-post-processors-shell-local' --- # Local Shell Post Processor @@ -19,7 +19,7 @@ some task with the packer outputs. The example below is fully functional. -```json +``` json { "type": "shell-local", "inline": ["echo foo"] @@ -33,36 +33,36 @@ required element is either "inline" or "script". Every other option is optional. 
Exactly *one* of the following is required: -- `inline` (array of strings) - This is an array of commands to execute. The +- `inline` (array of strings) - This is an array of commands to execute. The commands are concatenated by newlines and turned into a single file, so they are all executed within the same context. This allows you to change directories in one command and use something in the directory in the next and so on. Inline scripts are the easiest way to pull off simple tasks within the machine. -- `script` (string) - The path to a script to execute. This path can be +- `script` (string) - The path to a script to execute. This path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed. -- `scripts` (array of strings) - An array of scripts to execute. The scripts +- `scripts` (array of strings) - An array of scripts to execute. The scripts will be executed in the order specified. Each script is executed in isolation, so state such as variables from one script won't carry on to the next. Optional parameters: -- `environment_vars` (array of strings) - An array of key/value pairs to +- `environment_vars` (array of strings) - An array of key/value pairs to inject prior to the execute\_command. The format should be `key=value`. Packer injects some environmental variables by default into the environment, as well, which are covered in the section below. -- `execute_command` (string) - The command to use to execute the script. By +- `execute_command` (string) - The command to use to execute the script. By default this is `chmod +x "{{.Script}}"; {{.Vars}} "{{.Script}}"`. The value of this is treated as [template engine](/docs/templates/engine.html). There are two available variables: `Script`, which is the path to the script to run, and `Vars`, which is the list of `environment_vars`, if configured.
-- `inline_shebang` (string) - The +- `inline_shebang` (string) - The [shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use when running commands specified by `inline`. By default, this is `/bin/sh -e`. If you're not using `inline`, then this configuration has no effect. @@ -82,11 +82,11 @@ In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. -- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the +- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the machine that the script is running on. This is useful if you want to run only certain parts of the script on systems built with certain builders. @@ -99,7 +99,7 @@ you much time in the process. ### Once Per Builder -The `shell-local` script(s) you pass are run once per builder. That means that +The `shell-local` script(s) you pass are run once per builder. That means that if you have an `amazon-ebs` builder and a `docker` builder, your script will be run twice. If you have 3 builders, it will run 3 times, once for each builder. @@ -112,7 +112,7 @@ of files produced by a `builder` to a json file after each `builder` is run. 
For example, if you wanted to package a file from the file builder into a tarball, you might write this: -```json +``` json { "builders": [ { diff --git a/website/source/docs/post-processors/vagrant-cloud.html.md b/website/source/docs/post-processors/vagrant-cloud.html.md index f0e6f75a2..0fa535710 100644 --- a/website/source/docs/post-processors/vagrant-cloud.html.md +++ b/website/source/docs/post-processors/vagrant-cloud.html.md @@ -1,12 +1,12 @@ --- +description: | + The Packer Vagrant Cloud post-processor receives a Vagrant box from the + `vagrant` post-processor and pushes it to Vagrant Cloud. Vagrant Cloud hosts + and serves boxes to Vagrant, allowing you to version and distribute boxes to + an organization in a simple way. layout: docs -sidebar_current: docs-post-processors-vagrant-cloud -page_title: Vagrant Cloud - Post-Processors -description: |- - The Packer Vagrant Cloud post-processor receives a Vagrant box from the - `vagrant` post-processor and pushes it to Vagrant Cloud. Vagrant Cloud hosts - and serves boxes to Vagrant, allowing you to version and distribute boxes to - an organization in a simple way. +page_title: 'Vagrant Cloud - Post-Processors' +sidebar_current: 'docs-post-processors-vagrant-cloud' --- # Vagrant Cloud Post-Processor @@ -33,16 +33,16 @@ and deliver them to your team in some fashion. Here is an example workflow: -1. You use Packer to build a Vagrant Box for the `virtualbox` provider -1. The `vagrant-cloud` post-processor is configured to point to the box - `hashicorp/foobar` on Vagrant Cloud via the `box_tag` configuration -1. The post-processor receives the box from the `vagrant` post-processor -1. It then creates the configured version, or verifies the existence of it, on - Vagrant Cloud -1. A provider matching the name of the Vagrant provider is then created -1. The box is uploaded to Vagrant Cloud -1. The upload is verified -1. The version is released and available to users of the box +1. 
You use Packer to build a Vagrant Box for the `virtualbox` provider +2. The `vagrant-cloud` post-processor is configured to point to the box + `hashicorp/foobar` on Vagrant Cloud via the `box_tag` configuration +3. The post-processor receives the box from the `vagrant` post-processor +4. It then creates the configured version, or verifies the existence of it, on + Vagrant Cloud +5. A provider matching the name of the Vagrant provider is then created +6. The box is uploaded to Vagrant Cloud +7. The upload is verified +8. The version is released and available to users of the box ## Configuration @@ -51,16 +51,16 @@ on Vagrant Cloud, as well as authentication and version information. ### Required: -- `access_token` (string) - Your access token for the Vagrant Cloud API. This +- `access_token` (string) - Your access token for the Vagrant Cloud API. This can be generated on your [tokens page](https://vagrantcloud.com/account/tokens). If not specified, the environment will be searched. First, `VAGRANT_CLOUD_TOKEN` is checked, and if nothing is found, finally `ATLAS_TOKEN` will be used. -- `box_tag` (string) - The shorthand tag for your box that maps to Vagrant +- `box_tag` (string) - The shorthand tag for your box that maps to Vagrant Cloud, i.e. `hashicorp/precise64` for `vagrantcloud.com/hashicorp/precise64` -- `version` (string) - The version number, typically incrementing a +- `version` (string) - The version number, typically incrementing a previous version. The version string is validated based on [Semantic Versioning](http://semver.org/). The string must match a pattern that could be semver, and doesn't validate that the version comes after your @@ -68,19 +68,19 @@ on Vagrant Cloud, as well as authentication and version information. ### Optional: -- `no_release` (string) - If set to true, does not release the version on +- `no_release` (string) - If set to true, does not release the version on Vagrant Cloud, leaving it unreleased.
You can manually release the version via the API or Web UI. Defaults to false. -- `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud. This +- `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud. This is useful if you're using Vagrant Private Cloud in your own network. Defaults to `https://vagrantcloud.com/api/v1` -- `version_description` (string) - Optionally markdown text used as a +- `version_description` (string) - Optional markdown text used as a full-length and in-depth description of the version, typically for denoting changes introduced -- `box_download_url` (string) - Optional URL for a self-hosted box. If this is +- `box_download_url` (string) - Optional URL for a self-hosted box. If this is set, the box will not be uploaded to the Vagrant Cloud. ## Use with Vagrant Post-Processor @@ -90,7 +90,7 @@ An example configuration is below. Note the use of a doubly-nested array, which ensures that the Vagrant Cloud post-processor is run after the Vagrant post-processor. -```json +``` json { "variables": { "cloud_token": "{{ env `ATLAS_TOKEN` }}", diff --git a/website/source/docs/post-processors/vagrant.html.md b/website/source/docs/post-processors/vagrant.html.md index c2e1b0ca4..deb86c0c8 100644 --- a/website/source/docs/post-processors/vagrant.html.md +++ b/website/source/docs/post-processors/vagrant.html.md @@ -1,12 +1,12 @@ --- +description: | + The Packer Vagrant post-processor takes a build and converts the artifact into + a valid Vagrant box, if it can. This lets you use Packer to automatically + create arbitrarily complex Vagrant boxes, and is in fact how the official + boxes distributed by Vagrant are created. layout: docs -sidebar_current: docs-post-processors-vagrant-box -page_title: Vagrant - Post-Processors -description: |- - The Packer Vagrant post-processor takes a build and converts the artifact into - a valid Vagrant box, if it can.
This lets you use Packer to automatically - create arbitrarily complex Vagrant boxes, and is in fact how the official - boxes distributed by Vagrant are created. +page_title: 'Vagrant - Post-Processors' +sidebar_current: 'docs-post-processors-vagrant-box' --- # Vagrant Post-Processor @@ -30,15 +30,15 @@ certain builders into proper boxes for their respective providers. Currently, the Vagrant post-processor can create boxes for the following providers. -- AWS -- DigitalOcean -- Hyper-V -- Parallels -- QEMU -- VirtualBox -- VMware +- AWS +- DigitalOcean +- Hyper-V +- Parallels +- QEMU +- VirtualBox +- VMware --> **Support for additional providers** is planned. If the Vagrant +-> **Support for additional providers** is planned. If the Vagrant post-processor doesn't support creating boxes for a provider you care about, please help by contributing to Packer and adding support for it. @@ -52,19 +52,19 @@ However, if you want to configure things a bit more, the post-processor does expose some configuration options. The available options are listed below, with more details about certain options in following sections. -- `compression_level` (integer) - An integer representing the compression +- `compression_level` (integer) - An integer representing the compression level to use when creating the Vagrant box. Valid values range from 0 to 9, with 0 being no compression and 9 being the best compression. By default, compression is enabled at level 6. -- `include` (array of strings) - Paths to files to include in the Vagrant box. +- `include` (array of strings) - Paths to files to include in the Vagrant box. These files will each be copied into the top level directory of the Vagrant box (regardless of their paths). They can then be used from the Vagrantfile. -- `keep_input_artifact` (boolean) - If set to true, do not delete the +- `keep_input_artifact` (boolean) - If set to true, do not delete the `output_directory` on a successful build. Defaults to false. 
-- `output` (string) - The full path to the box file that will be created by +- `output` (string) - The full path to the box file that will be created by this post-processor. This is a [configuration template](/docs/templates/engine.html). The variable `Provider` is replaced by the Vagrant provider the box is for. The variable @@ -72,7 +72,7 @@ more details about certain options in following sections. `BuildName` is replaced with the name of the build. By default, the value of this config is `packer_{{.BuildName}}_{{.Provider}}.box`. -- `vagrantfile_template` (string) - Path to a template to use for the +- `vagrantfile_template` (string) - Path to a template to use for the Vagrantfile that is packaged with the box. ## Provider-Specific Overrides @@ -85,7 +85,7 @@ post-processor lets you do this. Specify overrides within the `override` configuration by provider name: -```json +``` json { "type": "vagrant", "compression_level": 1, diff --git a/website/source/docs/post-processors/vsphere.html.md b/website/source/docs/post-processors/vsphere.html.md index 4f5644034..9b16cac79 100644 --- a/website/source/docs/post-processors/vsphere.html.md +++ b/website/source/docs/post-processors/vsphere.html.md @@ -1,10 +1,10 @@ --- +description: | + The Packer vSphere post-processor takes an artifact from the VMware builder + and uploads it to a vSphere endpoint. layout: docs -sidebar_current: docs-post-processors-vsphere -page_title: vSphere - Post-Processors -description: |- - The Packer vSphere post-processor takes an artifact from the VMware builder - and uploads it to a vSphere endpoint. +page_title: 'vSphere - Post-Processors' +sidebar_current: 'docs-post-processors-vsphere' --- # vSphere Post-Processor @@ -22,43 +22,42 @@ each category, the available configuration keys are alphabetized. Required: -- `cluster` (string) - The cluster to upload the VM to. +- `cluster` (string) - The cluster to upload the VM to. 
-- `datacenter` (string) - The name of the datacenter within vSphere to add the +- `datacenter` (string) - The name of the datacenter within vSphere to add the VM to. -- `datastore` (string) - The name of the datastore to store this VM. This is +- `datastore` (string) - The name of the datastore to store this VM. This is *not required* if `resource_pool` is specified. -- `host` (string) - The vSphere host that will be contacted to perform the +- `host` (string) - The vSphere host that will be contacted to perform the VM upload. -- `password` (string) - Password to use to authenticate to the +- `password` (string) - Password to use to authenticate to the vSphere endpoint. -- `resource_pool` (string) - The resource pool to upload the VM to. This is +- `resource_pool` (string) - The resource pool to upload the VM to. This is *not required*. -- `username` (string) - The username to use to authenticate to the +- `username` (string) - The username to use to authenticate to the vSphere endpoint. -- `vm_name` (string) - The name of the VM once it is uploaded. +- `vm_name` (string) - The name of the VM once it is uploaded. Optional: -- `disk_mode` (string) - Target disk format. See `ovftool` manual for +- `disk_mode` (string) - Target disk format. See `ovftool` manual for available options. By default, "thick" will be used. -- `insecure` (boolean) - Whether or not the connection to vSphere can be done +- `insecure` (boolean) - Whether or not the connection to vSphere can be done over an insecure connection. By default this is false. -- `vm_folder` (string) - The folder within the datastore to store the VM. +- `vm_folder` (string) - The folder within the datastore to store the VM. -- `vm_network` (string) - The name of the VM network this VM will be - added to. +- `vm_network` (string) - The name of the VM network this VM will be + added to. -- `overwrite` (boolean) - If it's true force the system to overwrite the - existing files instead create new ones. 
Default is false +- `overwrite` (boolean) - If true, forces the system to overwrite the + existing files instead of creating new ones. Defaults to false -- `options` (array of strings) - Custom options to add in ovftool. See `ovftool - --help` to list all the options +- `options` (array of strings) - Custom options to pass to ovftool. See `ovftool --help` to list all the options. diff --git a/website/source/docs/provisioners/ansible-local.html.md b/website/source/docs/provisioners/ansible-local.html.md index 746fb2015..3ca775cf4 100644 --- a/website/source/docs/provisioners/ansible-local.html.md +++ b/website/source/docs/provisioners/ansible-local.html.md @@ -1,11 +1,11 @@ --- +description: | + The ansible-local Packer provisioner configures Ansible to run on the + machine built by Packer from local Playbook and Role files. Playbooks and Roles can + be uploaded from your local machine to the remote machine. layout: docs -sidebar_current: docs-provisioners-ansible-local -page_title: Ansible Local - Provisioners -description: |- - The ansible-local Packer provisioner configures Ansible to run on the - machine by Packer from local Playbook and Role files. Playbooks and Roles can - be uploaded from your local machine to the remote machine. +page_title: 'Ansible Local - Provisioners' +sidebar_current: 'docs-provisioners-ansible-local' --- # Ansible Local Provisioner @@ -18,7 +18,7 @@ uploaded from your local machine to the remote machine. Ansible is run in [local mode](https://docs.ansible.com/ansible/playbooks_delegation.html#local-playbooks) via the `ansible-playbook` command. --> **Note:** Ansible will *not* be installed automatically by this +-> **Note:** Ansible will *not* be installed automatically by this provisioner. This provisioner expects that Ansible is already installed on the machine. It is common practice to use the [shell provisioner](/docs/provisioners/shell.html) before the Ansible provisioner to do @@ -28,7 +28,7 @@ this. The example below is fully functional.
-```json +``` json { "type": "ansible-local", "playbook_file": "local.yml" @@ -41,38 +41,37 @@ The reference of available configuration options is listed below. Required: -- `playbook_file` (string) - The playbook file to be executed by ansible. This +- `playbook_file` (string) - The playbook file to be executed by ansible. This file must exist on your local system and will be uploaded to the remote machine. Optional: -- `command` (string) - The command to invoke ansible. Defaults - to "ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook". +- `command` (string) - The command to invoke ansible. Defaults + to "ANSIBLE\_FORCE\_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook". Note, This disregards the value of `-color` when passed to `packer build`. To disable colors, set this to `PYTHONUNBUFFERED=1 ansible-playbook`. -- `extra_arguments` (array of strings) - An array of extra arguments to pass - to the ansible command. By default, this is empty. These arguments _will_ +- `extra_arguments` (array of strings) - An array of extra arguments to pass + to the ansible command. By default, this is empty. These arguments *will* be passed through a shell and arguments should be quoted accordingly. Usage example: -``` -"extra_arguments": [ "--extra-vars \"Region={{user `Region`}} Stage={{user `Stage`}}\"" ] -``` + + "extra_arguments": [ "--extra-vars \"Region={{user `Region`}} Stage={{user `Stage`}}\"" ] -- `inventory_groups` (string) - A comma-separated list of groups to which +- `inventory_groups` (string) - A comma-separated list of groups to which packer will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2` will generate an Ansible inventory like: -```text +``` text [my_group_1] 127.0.0.1 [my_group_2] 127.0.0.1 ``` -- `inventory_file` (string) - The inventory file to be used by ansible. This +- `inventory_file` (string) - The inventory file to be used by ansible. This file must exist on your local system and will be uploaded to the remote machine. 
@@ -82,7 +81,7 @@ specified host you're building. The `--limit` argument can be provided in the An example inventory file may look like: -```text +``` text [chi-dbservers] db-01 ansible_connection=local db-02 ansible_connection=local @@ -102,32 +101,32 @@ chi-dbservers chi-appservers ``` -- `playbook_dir` (string) - a path to the complete ansible directory structure +- `playbook_dir` (string) - A path to the complete ansible directory structure on your local system to be copied to the remote machine as the `staging_directory` before all other files and directories. -- `playbook_paths` (array of strings) - An array of directories of playbook files on +- `playbook_paths` (array of strings) - An array of directories of playbook files on your local system. These will be uploaded to the remote machine under `staging_directory`/playbooks. By default, this is empty. -- `galaxy_file` (string) - A requirements file which provides a way to install +- `galaxy_file` (string) - A requirements file which provides a way to install roles with the [ansible-galaxy cli](http://docs.ansible.com/ansible/galaxy.html#the-ansible-galaxy-command-line-tool) on the remote machine. By default, this is empty. -- `group_vars` (string) - a path to the directory containing ansible group +- `group_vars` (string) - A path to the directory containing ansible group variables on your local system to be copied to the remote machine. By default, this is empty. -- `host_vars` (string) - a path to the directory containing ansible host +- `host_vars` (string) - A path to the directory containing ansible host variables on your local system to be copied to the remote machine. By default, this is empty. -- `role_paths` (array of strings) - An array of paths to role directories on +- `role_paths` (array of strings) - An array of paths to role directories on your local system. These will be uploaded to the remote machine under `staging_directory`/roles. By default, this is empty.
-- `staging_directory` (string) - The directory where all the configuration of +- `staging_directory` (string) - The directory where all the configuration of Ansible by Packer will be placed. By default this is `/tmp/packer-provisioner-ansible-local/`, where `` is replaced with a unique ID so that this provisioner can be run more than once. If @@ -144,15 +143,15 @@ In addition to being able to specify extra arguments using the `extra_arguments` configuration, the provisioner automatically defines certain commonly useful Ansible variables: -- `packer_build_name` is set to the name of the build that Packer is running. +- `packer_build_name` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly when using a common playbook. -- `packer_builder_type` is the type of the builder that was used to create the +- `packer_builder_type` is the type of the builder that was used to create the machine that the script is running on. This is useful if you want to run only certain parts of the playbook on systems built with certain builders. -- `packer_http_addr` If using a builder that provides an http server for file +- `packer_http_addr` If using a builder that provides an http server for file transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this will be set to the address. You can use this address in your provisioner to download large files over http. This may be useful if you're experiencing diff --git a/website/source/docs/provisioners/ansible.html.md b/website/source/docs/provisioners/ansible.html.md index 0a4cdf243..61719509e 100644 --- a/website/source/docs/provisioners/ansible.html.md +++ b/website/source/docs/provisioners/ansible.html.md @@ -1,10 +1,10 @@ --- +description: | + The ansible Packer provisioner allows Ansible playbooks to be run to + provision the machine. 
layout: docs -sidebar_current: docs-provisioners-ansible-remote -page_title: Ansible - Provisioners -description: |- - The ansible Packer provisioner allows Ansible playbooks to be run to - provision the machine. +page_title: 'Ansible - Provisioners' +sidebar_current: 'docs-provisioners-ansible-remote' --- # Ansible Provisioner @@ -23,7 +23,7 @@ given in the json config. This is a fully functional template that will provision an image on DigitalOcean. Replace the mock `api_token` value with your own. -```json +``` json { "provisioners": [ { @@ -47,80 +47,80 @@ DigitalOcean. Replace the mock `api_token` value with your own. Required Parameters: -- `playbook_file` - The playbook to be run by Ansible. +- `playbook_file` - The playbook to be run by Ansible. Optional Parameters: -- `ansible_env_vars` (array of strings) - Environment variables to set before - running Ansible. - Usage example: +- `ansible_env_vars` (array of strings) - Environment variables to set before + running Ansible. + Usage example: - ```json + ``` json { "ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ] } ``` -- `command` (string) - The command to invoke ansible. - Defaults to `ansible-playbook`. +- `command` (string) - The command to invoke ansible. + Defaults to `ansible-playbook`. -- `empty_groups` (array of strings) - The groups which should be present in - inventory file but remain empty. +- `empty_groups` (array of strings) - The groups which should be present in + inventory file but remain empty. -- `extra_arguments` (array of strings) - Extra arguments to pass to Ansible. - These arguments _will not_ be passed through a shell and arguments should - not be quoted. Usage example: +- `extra_arguments` (array of strings) - Extra arguments to pass to Ansible. + These arguments *will not* be passed through a shell and arguments should + not be quoted. 
Usage example: - ```json + ``` json { "extra_arguments": [ "--extra-vars", "Region={{user `Region`}} Stage={{user `Stage`}}" ] } ``` -- `groups` (array of strings) - The groups into which the Ansible host - should be placed. When unspecified, the host is not associated with any - groups. +- `groups` (array of strings) - The groups into which the Ansible host + should be placed. When unspecified, the host is not associated with any + groups. -- `host_alias` (string) - The alias by which the Ansible host should be known. - Defaults to `default`. +- `host_alias` (string) - The alias by which the Ansible host should be known. + Defaults to `default`. -- `inventory_directory` (string) - The directory in which to place the - temporary generated Ansible inventory file. By default, this is the - system-specific temporary file location. The fully-qualified name of this - temporary file will be passed to the `-i` argument of the `ansible` command - when this provisioner runs ansible. Specify this if you have an existing - inventory directory with `host_vars` `group_vars` that you would like to use - in the playbook that this provisioner will run. +- `inventory_directory` (string) - The directory in which to place the + temporary generated Ansible inventory file. By default, this is the + system-specific temporary file location. The fully-qualified name of this + temporary file will be passed to the `-i` argument of the `ansible` command + when this provisioner runs ansible. Specify this if you have an existing + inventory directory with `host_vars` `group_vars` that you would like to use + in the playbook that this provisioner will run. -- `local_port` (string) - The port on which to attempt to listen for SSH - connections. This value is a starting point. The provisioner will attempt - listen for SSH connections on the first available of ten ports, starting at - `local_port`. A system-chosen port is used when `local_port` is missing or - empty. 
+- `local_port` (string) - The port on which to attempt to listen for SSH + connections. This value is a starting point. The provisioner will attempt to + listen for SSH connections on the first available of ten ports, starting at + `local_port`. A system-chosen port is used when `local_port` is missing or + empty. -- `sftp_command` (string) - The command to run on the machine being provisioned - by Packer to handle the SFTP protocol that Ansible will use to transfer - files. The command should read and write on stdin and stdout, respectively. - Defaults to `/usr/lib/sftp-server -e`. +- `sftp_command` (string) - The command to run on the machine being provisioned + by Packer to handle the SFTP protocol that Ansible will use to transfer + files. The command should read and write on stdin and stdout, respectively. + Defaults to `/usr/lib/sftp-server -e`. -- `skip_version_check` (bool) - Check if ansible is installed prior to running. - Set this to `true`, for example, if you're going to install ansible during - the packer run. +- `skip_version_check` (bool) - Check if ansible is installed prior to running. + Set this to `true`, for example, if you're going to install ansible during + the packer run. -- `ssh_host_key_file` (string) - The SSH key that will be used to run the SSH - server on the host machine to forward commands to the target machine. Ansible - connects to this server and will validate the identity of the server using - the system known_hosts. The default behavior is to generate and use a - onetime key. Host key checking is disabled via the - `ANSIBLE_HOST_KEY_CHECKING` environment variable if the key is generated. +- `ssh_host_key_file` (string) - The SSH key that will be used to run the SSH + server on the host machine to forward commands to the target machine. Ansible + connects to this server and will validate the identity of the server using + the system known\_hosts. The default behavior is to generate and use a + onetime key.
Host key checking is disabled via the + `ANSIBLE_HOST_KEY_CHECKING` environment variable if the key is generated. -- `ssh_authorized_key_file` (string) - The SSH public key of the Ansible - `ssh_user`. The default behavior is to generate and use a onetime key. If - this key is generated, the corresponding private key is passed to - `ansible-playbook` with the `--private-key` option. +- `ssh_authorized_key_file` (string) - The SSH public key of the Ansible + `ssh_user`. The default behavior is to generate and use a onetime key. If + this key is generated, the corresponding private key is passed to + `ansible-playbook` with the `--private-key` option. -- `user` (string) - The `ansible_user` to use. Defaults to the user running - packer. +- `user` (string) - The `ansible_user` to use. Defaults to the user running + packer. ## Default Extra Variables @@ -128,11 +128,11 @@ In addition to being able to specify extra arguments using the `extra_arguments` configuration, the provisioner automatically defines certain commonly useful Ansible variables: -- `packer_build_name` is set to the name of the build that Packer is running. +- `packer_build_name` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly when using a common playbook. -- `packer_builder_type` is the type of the builder that was used to create the +- `packer_builder_type` is the type of the builder that was used to create the machine that the script is running on. This is useful if you want to run only certain parts of the playbook on systems built with certain builders. @@ -142,7 +142,7 @@ commonly useful Ansible variables: Redhat / CentOS builds have been known to fail with the following error due to `sftp_command`, which should be set to `/usr/libexec/openssh/sftp-server -e`: -```text +``` text ==> virtualbox-ovf: starting sftp subsystem virtualbox-ovf: fatal: [default]: UNREACHABLE! 
=> {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true} ``` @@ -151,7 +151,7 @@ Redhat / CentOS builds have been known to fail with the following error due to ` Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible connection to chroot. -```json +``` json { "builders": [ { @@ -178,7 +178,7 @@ Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible co Windows builds require a custom Ansible connection plugin and a particular configuration. Assuming a directory named `connection_plugins` is next to the playbook and contains a file named `packer.py` whose contents is -```python +``` python from __future__ import (absolute_import, division, print_function) __metaclass__ = type @@ -199,7 +199,7 @@ class Connection(SSHConnection): This template should build a Windows Server 2012 image on Google Cloud Platform: -```json +``` json { "variables": {}, "provisioners": [ @@ -230,3 +230,4 @@ This template should build a Windows Server 2012 image on Google Cloud Platform: } ] } +``` diff --git a/website/source/docs/provisioners/chef-client.html.md b/website/source/docs/provisioners/chef-client.html.md index 47ae5c917..93ec78725 100644 --- a/website/source/docs/provisioners/chef-client.html.md +++ b/website/source/docs/provisioners/chef-client.html.md @@ -1,11 +1,11 @@ --- +description: | + The chef-client Packer provisioner installs and configures software on + machines built by Packer using chef-client. Packer configures a Chef client to + talk to a remote Chef Server to provision the machine. layout: docs -sidebar_current: docs-provisioners-chef-client -page_title: Chef Client - Provisioners -description: |- - The chef-client Packer provisioner installs and configures software on - machines built by Packer using chef-client. Packer configures a Chef client to - talk to a remote Chef Server to provision the machine. 
+page_title: 'Chef Client - Provisioners' +sidebar_current: 'docs-provisioners-chef-client' --- # Chef Client Provisioner @@ -25,7 +25,7 @@ installed, using the official Chef installers provided by Chef. The example below is fully functional. It will install Chef onto the remote machine and run Chef client. -```json +``` json { "type": "chef-client", "server_url": "https://mychefserver.com/" @@ -41,86 +41,86 @@ is running must have knife on the path and configured globally, i.e, The reference of available configuration options is listed below. No configuration is actually required. -- `chef_environment` (string) - The name of the chef\_environment sent to the +- `chef_environment` (string) - The name of the chef\_environment sent to the Chef server. By default this is empty and will not use an environment. -- `config_template` (string) - Path to a template that will be used for the +- `config_template` (string) - Path to a template that will be used for the Chef configuration file. By default Packer only sets configuration it needs to match the settings set in the provisioner configuration. If you need to set configurations that the Packer provisioner doesn't support, then you should use a custom configuration template. See the dedicated "Chef Configuration" section below for more details. -- `encrypted_data_bag_secret_path` (string) - The path to the file containing +- `encrypted_data_bag_secret_path` (string) - The path to the file containing the secret for encrypted data bags. By default, this is empty, so no secret will be available. -- `execute_command` (string) - The command used to execute Chef. This has +- `execute_command` (string) - The command used to execute Chef. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. -- `guest_os_type` (string) - The target guest OS type, either "unix" or +- `guest_os_type` (string) - The target guest OS type, either "unix" or "windows". 
Setting this to "windows" will cause the provisioner to use - Windows friendly paths and commands. By default, this is "unix". + Windows friendly paths and commands. By default, this is "unix". -- `install_command` (string) - The command used to install Chef. This has +- `install_command` (string) - The command used to install Chef. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. -- `json` (object) - An arbitrary mapping of JSON that will be available as +- `json` (object) - An arbitrary mapping of JSON that will be available as node attributes while running Chef. -- `knife_command` (string) - The command used to run Knife during node clean-up. This has +- `knife_command` (string) - The command used to run Knife during node clean-up. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. -- `node_name` (string) - The name of the node to register with the +- `node_name` (string) - The name of the node to register with the Chef Server. This is optional and by default is packer-{{uuid}}. -- `prevent_sudo` (boolean) - By default, the configured commands that are +- `prevent_sudo` (boolean) - By default, the configured commands that are executed to install and run Chef are executed with `sudo`. If this is true, - then the sudo will be omitted. This has no effect when guest_os_type is + then the sudo will be omitted. This has no effect when guest\_os\_type is windows. -- `run_list` (array of strings) - The [run +- `run_list` (array of strings) - The [run list](http://docs.chef.io/essentials_node_object_run_lists.html) for Chef. By default this is empty, and will use the run list sent down by the Chef Server. -- `server_url` (string) - The URL to the Chef server. This is required. +- `server_url` (string) - The URL to the Chef server. This is required. 
-- `skip_clean_client` (boolean) - If true, Packer won't remove the client from +- `skip_clean_client` (boolean) - If true, Packer won't remove the client from the Chef server after it is done running. By default, this is false. -- `skip_clean_node` (boolean) - If true, Packer won't remove the node from the +- `skip_clean_node` (boolean) - If true, Packer won't remove the node from the Chef server after it is done running. By default, this is false. -- `skip_install` (boolean) - If true, Chef will not automatically be installed +- `skip_install` (boolean) - If true, Chef will not automatically be installed on the machine using the Chef omnibus installers. -- `ssl_verify_mode` (string) - Set to "verify\_none" to skip validation of +- `ssl_verify_mode` (string) - Set to "verify\_none" to skip validation of SSL certificates. If not set, this defaults to "verify\_peer" which validates all SSL certifications. -- `staging_directory` (string) - This is the directory where all the +- `staging_directory` (string) - This is the directory where all the configuration of Chef by Packer will be placed. By default this is - "/tmp/packer-chef-client" when guest_os_type unix and + "/tmp/packer-chef-client" when guest\_os\_type unix and "$env:TEMP/packer-chef-client" when windows. This directory doesn't need to exist but must have proper permissions so that the user that Packer uses is able to create directories and write into this folder. By default the provisioner will create and chmod 0777 this directory. -- `client_key` (string) - Path to client key. If not set, this defaults to a +- `client_key` (string) - Path to client key. If not set, this defaults to a file named client.pem in `staging_directory`. -- `validation_client_name` (string) - Name of the validation client. If not +- `validation_client_name` (string) - Name of the validation client. If not set, this won't be set in the configuration and the default that Chef uses will be used. 
-- `validation_key_path` (string) - Path to the validation key for +- `validation_key_path` (string) - Path to the validation key for communicating with the Chef Server. This will be uploaded to the remote machine. If this is NOT set, then it is your responsibility via other means (shell provisioner, etc.) to get a validation key to where Chef @@ -135,7 +135,7 @@ template if you'd like to set custom configurations. The default value for the configuration template is: -```liquid +``` liquid log_level :info log_location STDOUT chef_server_url "{{.ServerUrl}}" @@ -164,32 +164,32 @@ This template is a [configuration template](/docs/templates/engine.html) and has a set of variables available to use: -- `ChefEnvironment` - The Chef environment name. -- `EncryptedDataBagSecretPath` - The path to the secret key file to decrypt - encrypted data bags. -- `NodeName` - The node name set in the configuration. -- `ServerUrl` - The URL of the Chef Server set in the configuration. -- `SslVerifyMode` - Whether Chef SSL verify mode is on or off. -- `ValidationClientName` - The name of the client used for validation. -- `ValidationKeyPath` - Path to the validation key, if it is set. +- `ChefEnvironment` - The Chef environment name. +- `EncryptedDataBagSecretPath` - The path to the secret key file to decrypt + encrypted data bags. +- `NodeName` - The node name set in the configuration. +- `ServerUrl` - The URL of the Chef Server set in the configuration. +- `SslVerifyMode` - Whether Chef SSL verify mode is on or off. +- `ValidationClientName` - The name of the client used for validation. +- `ValidationKeyPath` - Path to the validation key, if it is set. 
## Execute Command By default, Packer uses the following command (broken across multiple lines for readability) to execute Chef: -```liquid +``` liquid {{if .Sudo}}sudo {{end}}chef-client \ --no-color \ -c {{.ConfigPath}} \ -j {{.JsonPath}} ``` -When guest_os_type is set to "windows", Packer uses the following command to +When guest\_os\_type is set to "windows", Packer uses the following command to execute Chef. The full path to Chef is required because the PATH environment variable changes don't immediately propogate to running processes. -```liquid +``` liquid c:/opscode/chef/bin/chef-client.bat \ --no-color \ -c {{.ConfigPath}} \ @@ -200,9 +200,9 @@ This command can be customized using the `execute_command` configuration. As you can see from the default value above, the value of this configuration can contain various template variables, defined below: -- `ConfigPath` - The path to the Chef configuration file. -- `JsonPath` - The path to the JSON attributes file for the node. -- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the +- `ConfigPath` - The path to the Chef configuration file. +- `JsonPath` - The path to the JSON attributes file for the node. +- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the value of the `prevent_sudo` configuration. ## Install Command @@ -211,15 +211,15 @@ By default, Packer uses the following command (broken across multiple lines for readability) to install Chef. This command can be customized if you want to install Chef in another way. 
-```text +``` text curl -L https://www.chef.io/chef/install.sh | \ {{if .Sudo}}sudo{{end}} bash ``` -When guest_os_type is set to "windows", Packer uses the following command to +When guest\_os\_type is set to "windows", Packer uses the following command to install the latest version of Chef: -```text +``` text powershell.exe -Command "(New-Object System.Net.WebClient).DownloadFile('http://chef.io/chef/install.msi', 'C:\\Windows\\Temp\\chef.msi');Start-Process 'msiexec' -ArgumentList '/qb /i C:\\Windows\\Temp\\chef.msi' -NoNewWindow -Wait" ``` @@ -230,17 +230,17 @@ This command can be customized using the `install_command` configuration. By default, Packer uses the following command (broken across multiple lines for readability) to execute Chef: -```liquid +``` liquid {{if .Sudo}}sudo {{end}}knife \ {{.Args}} \ {{.Flags}} ``` -When guest_os_type is set to "windows", Packer uses the following command to +When guest\_os\_type is set to "windows", Packer uses the following command to execute Chef. The full path to Chef is required because the PATH environment variable changes don't immediately propogate to running processes. -```liquid +``` liquid c:/opscode/chef/bin/knife.bat \ {{.Args}} \ {{.Flags}} ``` @@ -250,9 +250,9 @@ This command can be customized using the `knife_command` configuration. As you can see from the default value above, the value of this configuration can contain various template variables, defined below: -- `Args` - The command arguments that are getting passed to the Knife command. -- `Flags` - The command flags that are getting passed to the Knife command.. -- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the +- `Args` - The command arguments that are passed to the Knife command. +- `Flags` - The command flags that are passed to the Knife command. +- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the value of the `prevent_sudo` configuration.
## Folder Permissions @@ -272,19 +272,17 @@ mode, while passing a `run_list` using a variable. **Local environment variables** -``` -# Machines Chef directory -export PACKER_CHEF_DIR=/var/chef-packer -# Comma separated run_list -export PACKER_CHEF_RUN_LIST="recipe[apt],recipe[nginx]" -``` + # Machines Chef directory + export PACKER_CHEF_DIR=/var/chef-packer + # Comma separated run_list + export PACKER_CHEF_RUN_LIST="recipe[apt],recipe[nginx]" **Packer variables** Set the necessary Packer variables using environment variables or provide a [var file](/docs/templates/user-variables.html). -```json +``` json "variables": { "chef_dir": "{{env `PACKER_CHEF_DIR`}}", "chef_run_list": "{{env `PACKER_CHEF_RUN_LIST`}}", @@ -301,7 +299,7 @@ Make sure we have the correct directories and permissions for the `chef-client` provisioner. You will need to bootstrap the Chef run by providing the necessary cookbooks using Berkshelf or some other means. -```json +``` json { "type": "file", "source": "{{user `packer_chef_bootstrap_dir`}}", diff --git a/website/source/docs/provisioners/chef-solo.html.md b/website/source/docs/provisioners/chef-solo.html.md index 7adddf85d..fa5595894 100644 --- a/website/source/docs/provisioners/chef-solo.html.md +++ b/website/source/docs/provisioners/chef-solo.html.md @@ -1,11 +1,11 @@ --- +description: | + The chef-solo Packer provisioner installs and configures software on machines + built by Packer using chef-solo. Cookbooks can be uploaded from your local + machine to the remote machine or remote paths can be used. layout: docs -sidebar_current: docs-provisioners-chef-solo -page_title: Chef Solo - Provisioners -description: |- - The chef-solo Packer provisioner installs and configures software on machines - built by Packer using chef-solo. Cookbooks can be uploaded from your local - machine to the remote machine or remote paths can be used. 
+page_title: 'Chef Solo - Provisioners' +sidebar_current: 'docs-provisioners-chef-solo' --- # Chef Solo Provisioner @@ -25,7 +25,7 @@ installed, using the official Chef installers provided by Chef Inc. The example below is fully functional and expects cookbooks in the "cookbooks" directory relative to your working directory. -```json +``` json { "type": "chef-solo", "cookbook_paths": ["cookbooks"] @@ -37,80 +37,80 @@ directory relative to your working directory. The reference of available configuration options is listed below. No configuration is actually required, but at least `run_list` is recommended. -- `chef_environment` (string) - The name of the `chef_environment` sent to the +- `chef_environment` (string) - The name of the `chef_environment` sent to the Chef server. By default this is empty and will not use an environment -- `config_template` (string) - Path to a template that will be used for the +- `config_template` (string) - Path to a template that will be used for the Chef configuration file. By default Packer only sets configuration it needs to match the settings set in the provisioner configuration. If you need to set configurations that the Packer provisioner doesn't support, then you should use a custom configuration template. See the dedicated "Chef Configuration" section below for more details. -- `cookbook_paths` (array of strings) - This is an array of paths to +- `cookbook_paths` (array of strings) - This is an array of paths to "cookbooks" directories on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty. -- `data_bags_path` (string) - The path to the "data\_bags" directory on your +- `data_bags_path` (string) - The path to the "data\_bags" directory on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty. 
-- `encrypted_data_bag_secret_path` (string) - The path to the file containing +- `encrypted_data_bag_secret_path` (string) - The path to the file containing the secret for encrypted data bags. By default, this is empty, so no secret will be available. -- `environments_path` (string) - The path to the "environments" directory on +- `environments_path` (string) - The path to the "environments" directory on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty. -- `execute_command` (string) - The command used to execute Chef. This has +- `execute_command` (string) - The command used to execute Chef. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. -- `guest_os_type` (string) - The target guest OS type, either "unix" or +- `guest_os_type` (string) - The target guest OS type, either "unix" or "windows". Setting this to "windows" will cause the provisioner to use - Windows friendly paths and commands. By default, this is "unix". + Windows friendly paths and commands. By default, this is "unix". -- `install_command` (string) - The command used to install Chef. This has +- `install_command` (string) - The command used to install Chef. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. -- `json` (object) - An arbitrary mapping of JSON that will be available as +- `json` (object) - An arbitrary mapping of JSON that will be available as node attributes while running Chef. -- `prevent_sudo` (boolean) - By default, the configured commands that are +- `prevent_sudo` (boolean) - By default, the configured commands that are executed to install and run Chef are executed with `sudo`. If this is true, - then the sudo will be omitted. This has no effect when guest_os_type is + then the sudo will be omitted. 
This has no effect when guest\_os\_type is windows. -- `remote_cookbook_paths` (array of strings) - A list of paths on the remote +- `remote_cookbook_paths` (array of strings) - A list of paths on the remote machine where cookbooks will already exist. These may exist from a previous provisioner or step. If specified, Chef will be configured to look for cookbooks here. By default, this is empty. -- `roles_path` (string) - The path to the "roles" directory on your +- `roles_path` (string) - The path to the "roles" directory on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty. -- `run_list` (array of strings) - The [run +- `run_list` (array of strings) - The [run list](https://docs.chef.io/run_lists.html) for Chef. By default this is empty. -- `skip_install` (boolean) - If true, Chef will not automatically be installed +- `skip_install` (boolean) - If true, Chef will not automatically be installed on the machine using the Chef omnibus installers. -- `staging_directory` (string) - This is the directory where all the +- `staging_directory` (string) - This is the directory where all the configuration of Chef by Packer will be placed. By default this is - "/tmp/packer-chef-solo" when guest_os_type unix and + "/tmp/packer-chef-solo" when guest\_os\_type is unix and "$env:TEMP/packer-chef-solo" when windows. This directory doesn't need to exist but must have proper permissions so that the user that Packer uses is able to create directories and write into this folder. If the permissions are not correct, use a shell provisioner prior to this to configure it properly. -- `version` (string) - The version of Chef to be installed. By default this is +- `version` (string) - The version of Chef to be installed. By default this is empty which will install the latest version of Chef. ## Chef Configuration @@ -122,7 +122,7 @@ template if you'd like to set custom configurations. 
The default value for the configuration template is: -```liquid +``` liquid cookbook_path [{{.CookbookPaths}}] ``` This template is a [configuration template](/docs/templates/engine.html) and has a set of variables available to use: -- `ChefEnvironment` - The current enabled environment. Only non-empty if the +- `ChefEnvironment` - The current enabled environment. Only non-empty if the environment path is set. -- `CookbookPaths` is the set of cookbook paths ready to embedded directly into +- `CookbookPaths` is the set of cookbook paths ready to be embedded directly into a Ruby array to configure Chef. -- `DataBagsPath` is the path to the data bags folder. -- `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret -- `EnvironmentsPath` - The path to the environments folder. -- `RolesPath` - The path to the roles folder. +- `DataBagsPath` is the path to the data bags folder. +- `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret +- `EnvironmentsPath` - The path to the environments folder. +- `RolesPath` - The path to the roles folder. ## Execute Command By default, Packer uses the following command (broken across multiple lines for readability) to execute Chef: -```liquid +``` liquid {{if .Sudo}}sudo {{end}}chef-solo \ --no-color \ -c {{.ConfigPath}} \ -j {{.JsonPath}} ``` -When guest_os_type is set to "windows", Packer uses the following command to +When guest\_os\_type is set to "windows", Packer uses the following command to execute Chef. The full path to Chef is required because the PATH environment variable changes don't immediately propogate to running processes. -```liquid +``` liquid c:/opscode/chef/bin/chef-solo.bat \ --no-color \ -c {{.ConfigPath}} \ @@ -166,9 +166,9 @@ This command can be customized using the `execute_command` configuration. 
As you can see from the default value above, the value of this configuration can contain various template variables, defined below: -- `ConfigPath` - The path to the Chef configuration file. -- `JsonPath` - The path to the JSON attributes file for the node. -- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the +- `ConfigPath` - The path to the Chef configuration file. +- `JsonPath` - The path to the JSON attributes file for the node. +- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the value of the `prevent_sudo` configuration. ## Install Command @@ -177,15 +177,15 @@ By default, Packer uses the following command (broken across multiple lines for readability) to install Chef. This command can be customized if you want to install Chef in another way. -```text +``` text curl -L https://omnitruck.chef.io/install.sh | \ {{if .Sudo}}sudo{{end}} bash -s --{{if .Version}} -v {{.Version}}{{end}} ``` -When guest_os_type is set to "windows", Packer uses the following command to +When guest\_os\_type is set to "windows", Packer uses the following command to install the latest version of Chef: -```text +``` text powershell.exe -Command \". { iwr -useb https://omnitruck.chef.io/install.ps1 } | iex; install\" ``` diff --git a/website/source/docs/provisioners/converge.html.md b/website/source/docs/provisioners/converge.html.md index 8bc85b133..0673f2f32 100644 --- a/website/source/docs/provisioners/converge.html.md +++ b/website/source/docs/provisioners/converge.html.md @@ -1,10 +1,10 @@ --- +description: | + The converge Packer provisioner uses Converge modules to provision the + machine. layout: docs -sidebar_current: docs-provisioners-converge -page_title: Converge - Provisioners -description: |- - The converge Packer provisioner uses Converge modules to provision the - machine. +page_title: 'Converge - Provisioners' +sidebar_current: 'docs-provisioners-converge' --- # Converge Provisioner @@ -22,7 +22,7 @@ new images. 
The example below is fully functional. -```json +``` json { "type": "converge", "module": "https://raw.githubusercontent.com/asteris-llc/converge/master/samples/fileContent.hcl", @@ -37,37 +37,37 @@ The example below is fully functional. The reference of available configuration options is listed below. The only required element is "module". Every other option is optional. -- `module` (string) - Path (or URL) to the root module that Converge will apply. +- `module` (string) - Path (or URL) to the root module that Converge will apply. Optional parameters: -- `bootstrap` (boolean, defaults to false) - Set to allow the provisioner to - download the latest Converge bootstrap script and the specified `version` of - Converge from the internet. +- `bootstrap` (boolean, defaults to false) - Set to allow the provisioner to + download the latest Converge bootstrap script and the specified `version` of + Converge from the internet. -- `version` (string) - Set to a [released Converge version](https://github.com/asteris-llc/converge/releases) for bootstrap. +- `version` (string) - Set to a [released Converge version](https://github.com/asteris-llc/converge/releases) for bootstrap. -- `module_dirs` (array of directory specifications) - Module directories to - transfer to the remote host for execution. See below for the specification. +- `module_dirs` (array of directory specifications) - Module directories to + transfer to the remote host for execution. See below for the specification. -- `working_directory` (string) - The directory that Converge will change to - before execution. +- `working_directory` (string) - The directory that Converge will change to + before execution. -- `params` (maps of string to string) - parameters to pass into the root module. +- `params` (maps of string to string) - parameters to pass into the root module. -- `execute_command` (string) - the command used to execute Converge. 
This has - various - [configuration template variables](/docs/templates/engine.html) available. +- `execute_command` (string) - the command used to execute Converge. This has + various + [configuration template variables](/docs/templates/engine.html) available. -- `prevent_sudo` (bool) - stop Converge from running with adminstrator - privileges via sudo +- `prevent_sudo` (bool) - stop Converge from running with administrator + privileges via sudo -- `bootstrap_command` (string) - the command used to bootstrap Converge. This - has various - [configuration template variables](/docs/templates/engine.html) available. +- `bootstrap_command` (string) - the command used to bootstrap Converge. This + has various + [configuration template variables](/docs/templates/engine.html) available. -- `prevent_bootstrap_sudo` (bool) - stop Converge from bootstrapping with - administrator privileges via sudo +- `prevent_bootstrap_sudo` (bool) - stop Converge from bootstrapping with + administrator privileges via sudo ### Module Directories @@ -75,18 +75,18 @@ The provisioner can transfer module directories to the remote host for provisioning. Of these fields, `source` and `destination` are required in every directory. -- `source` (string) - the path to the folder on the local machine. +- `source` (string) - the path to the folder on the local machine. -- `destination` (string) - the path to the folder on the remote machine. Parent - directories will not be created; use the shell module to do this. +- `destination` (string) - the path to the folder on the remote machine. Parent + directories will not be created; use the shell module to do this. -- `exclude` (array of string) - files and directories to exclude from transfer. +- `exclude` (array of string) - files and directories to exclude from transfer. 
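Combining the module directory fields above, a minimal `module_dirs` entry might look like the following sketch (the paths and exclude patterns are illustrative, not taken from this diff):

``` json
{
  "type": "converge",
  "module": "/opt/converge/main.hcl",
  "module_dirs": [
    {
      "source": "converge/",
      "destination": "/opt/converge",
      "exclude": [".git"]
    }
  ]
}
```

Because parent directories are not created automatically, `/opt/converge` here would need to exist already, for example via an earlier shell provisioner.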
### Execute Command By default, Packer uses the following command (broken across multiple lines for readability) to execute Converge: -```liquid +``` liquid cd {{.WorkingDirectory}} && \ {{if .Sudo}}sudo {{end}}converge apply \ --local \ @@ -99,16 +99,16 @@ This command can be customized using the `execute_command` configuration. As you can see from the default value above, the value of this configuration can contain various template variables: -- `WorkingDirectory` - `directory` from the configuration. -- `Sudo` - the opposite of `prevent_sudo` from the configuration. -- `ParamsJSON` - The unquoted JSONified form of `params` from the configuration. -- `Module` - `module` from the configuration. +- `WorkingDirectory` - `directory` from the configuration. +- `Sudo` - the opposite of `prevent_sudo` from the configuration. +- `ParamsJSON` - The unquoted JSONified form of `params` from the configuration. +- `Module` - `module` from the configuration. ### Bootstrap Command By default, Packer uses the following command to bootstrap Converge: -```liquid +``` liquid curl -s https://get.converge.sh | {{if .Sudo}}sudo {{end}}sh {{if ne .Version ""}}-s -- -v {{.Version}}{{end}} ``` @@ -116,5 +116,5 @@ This command can be customized using the `bootstrap_command` configuration. As y can see from the default values above, the value of this configuration can contain various template variables: -- `Sudo` - the opposite of `prevent_bootstrap_sudo` from the configuration. -- `Version` - `version` from the configuration. +- `Sudo` - the opposite of `prevent_bootstrap_sudo` from the configuration. +- `Version` - `version` from the configuration. 
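As a sketch of the bootstrap options described above, a provisioner block that downloads Converge at a pinned release might look like this (the version number is an assumption; check the Converge releases page for real versions — the module URL is the one from the basic example):

``` json
{
  "type": "converge",
  "bootstrap": true,
  "version": "0.5.0",
  "module": "https://raw.githubusercontent.com/asteris-llc/converge/master/samples/fileContent.hcl",
  "params": {
    "message": "Hello, Packer!"
  }
}
```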
diff --git a/website/source/docs/provisioners/custom.html.md b/website/source/docs/provisioners/custom.html.md index bb8dddb4f..d3c227d42 100644 --- a/website/source/docs/provisioners/custom.html.md +++ b/website/source/docs/provisioners/custom.html.md @@ -1,12 +1,12 @@ --- +description: | + Packer is extensible, allowing you to write new provisioners without having to + modify the core source code of Packer itself. Documentation for creating new + provisioners is covered in the custom provisioners page of the Packer plugin + section. layout: docs -sidebar_current: docs-provisioners-custom -page_title: Custom - Provisioners -description: |- - Packer is extensible, allowing you to write new provisioners without having to - modify the core source code of Packer itself. Documentation for creating new - provisioners is covered in the custom provisioners page of the Packer plugin - section. +page_title: 'Custom - Provisioners' +sidebar_current: 'docs-provisioners-custom' --- # Custom Provisioner diff --git a/website/source/docs/provisioners/file.html.md b/website/source/docs/provisioners/file.html.md index 336324dbe..d00927fae 100644 --- a/website/source/docs/provisioners/file.html.md +++ b/website/source/docs/provisioners/file.html.md @@ -1,12 +1,12 @@ --- +description: | + The file Packer provisioner uploads files to machines built by Packer. The + recommended usage of the file provisioner is to use it to upload files, and + then use shell provisioner to move them to the proper place, set permissions, + etc. layout: docs -sidebar_current: docs-provisioners-file -page_title: File - Provisioners -description: |- - The file Packer provisioner uploads files to machines built by Packer. The - recommended usage of the file provisioner is to use it to upload files, and - then use shell provisioner to move them to the proper place, set permissions, - etc. 
+page_title: 'File - Provisioners' +sidebar_current: 'docs-provisioners-file' --- # File Provisioner @@ -22,7 +22,7 @@ The file provisioner can upload both single files and complete directories. ## Basic Example -```json +``` json { "type": "file", "source": "app.tar.gz", @@ -34,17 +34,17 @@ The file provisioner can upload both single files and complete directories. The available configuration options are listed below. All elements are required. -- `source` (string) - The path to a local file or directory to upload to +- `source` (string) - The path to a local file or directory to upload to the machine. The path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed. If this is a directory, the existence of a trailing slash is important. Read below on uploading directories. -- `destination` (string) - The path where the file will be uploaded to in +- `destination` (string) - The path where the file will be uploaded to in the machine. This value must be a writable location and any parent directories must already exist. -- `direction` (string) - The direction of the file transfer. This defaults to +- `direction` (string) - The direction of the file transfer. This defaults to "upload." If it is set to "download" then the file "source" in the machine will be downloaded locally to "destination" @@ -58,7 +58,7 @@ First, the destination directory must already exist. If you need to create it, use a shell provisioner just prior to the file provisioner in order to create the directory. If the destination directory does not exist, the file provisioner may succeed, but it will have undefined results. Note that the -`docker` builder does not have this requirement. It will create any needed +`docker` builder does not have this requirement. It will create any needed destination directories, but it's generally best practice to not rely on this behavior. @@ -86,7 +86,7 @@ treat local symlinks as regular files. 
If you wish to preserve symlinks when uploading, it's recommended that you use `tar`. Below is an example of what that might look like: -```text +``` text $ ls -l files total 16 drwxr-xr-x 3 mwhooker staff 102 Jan 27 17:10 a @@ -95,7 +95,7 @@ lrwxr-xr-x 1 mwhooker staff 1 Jan 27 17:10 b -> a lrwxr-xr-x 1 mwhooker staff 5 Jan 27 17:10 file1link -> file1 ``` -```json +``` json { "provisioners": [ { diff --git a/website/source/docs/provisioners/index.html.md b/website/source/docs/provisioners/index.html.md index 98d94b430..3d88ef327 100644 --- a/website/source/docs/provisioners/index.html.md +++ b/website/source/docs/provisioners/index.html.md @@ -1,10 +1,10 @@ --- +description: | + Provisioners use builtin and third-party software to install and configure the + machine image after booting. layout: docs -sidebar_current: docs-provisioners page_title: Provisioners -description: |- - Provisioners use builtin and third-party software to install and configure the - machine image after booting. +sidebar_current: 'docs-provisioners' --- # Provisioners @@ -13,10 +13,10 @@ Provisioners use builtin and third-party software to install and configure the machine image after booting. Provisioners prepare the system for use, so common use cases for provisioners include: -- installing packages -- patching the kernel -- creating users -- downloading application code +- installing packages +- patching the kernel +- creating users +- downloading application code These are just a few examples, and the possibilities for provisioners are endless. diff --git a/website/source/docs/provisioners/powershell.html.md b/website/source/docs/provisioners/powershell.html.md index baf5799bc..87a9a76f6 100644 --- a/website/source/docs/provisioners/powershell.html.md +++ b/website/source/docs/provisioners/powershell.html.md @@ -1,11 +1,11 @@ --- +description: | + The shell Packer provisioner provisions machines built by Packer using shell + scripts. 
Shell provisioning is the easiest way to get software installed and + configured on a machine. layout: docs -sidebar_current: docs-provisioners-powershell -page_title: PowerShell - Provisioners -description: |- - The shell Packer provisioner provisions machines built by Packer using shell - scripts. Shell provisioning is the easiest way to get software installed and - configured on a machine. +page_title: 'PowerShell - Provisioners' +sidebar_current: 'docs-provisioners-powershell' --- # PowerShell Provisioner @@ -19,7 +19,7 @@ It assumes that the communicator in use is WinRM. The example below is fully functional. -```json +``` json { "type": "powershell", "inline": ["dir c:\\"] @@ -33,73 +33,72 @@ required element is either "inline" or "script". Every other option is optional. Exactly *one* of the following is required: -- `inline` (array of strings) - This is an array of commands to execute. The +- `inline` (array of strings) - This is an array of commands to execute. The commands are concatenated by newlines and turned into a single file, so they are all executed within the same context. This allows you to change directories in one command and use something in the directory in the next and so on. Inline scripts are the easiest way to pull off simple tasks within the machine. -- `script` (string) - The path to a script to upload and execute in +- `script` (string) - The path to a script to upload and execute in the machine. This path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed. -- `scripts` (array of strings) - An array of scripts to execute. The scripts +- `scripts` (array of strings) - An array of scripts to execute. The scripts will be uploaded and executed in the order specified. Each script is executed in isolation, so state such as variables from one script won't carry on to the next. 
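For example, a hypothetical configuration using the `scripts` form to run several uploaded scripts in order (the file names are illustrative):

``` json
{
  "type": "powershell",
  "scripts": [
    "scripts/install-iis.ps1",
    "scripts/configure-firewall.ps1"
  ]
}
```

Remember that each script runs in isolation, so any state shared between them must be written to disk or to the machine's configuration rather than to PowerShell variables.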
Optional parameters: -- `binary` (boolean) - If true, specifies that the script(s) are binary files, +- `binary` (boolean) - If true, specifies that the script(s) are binary files, and Packer should therefore not convert Windows line endings to Unix line endings (if there are any). By default this is false. -- `environment_vars` (array of strings) - An array of key/value pairs to +- `environment_vars` (array of strings) - An array of key/value pairs to inject prior to the execute\_command. The format should be `key=value`. Packer injects some environmental variables by default into the environment, as well, which are covered in the section below. -- `execute_command` (string) - The command to use to execute the script. By +- `execute_command` (string) - The command to use to execute the script. By default this is `powershell "& { {{.Vars}}{{.Path}}; exit $LastExitCode}"`. The value of this is treated as [configuration template](/docs/templates/engine.html). There are two available variables: `Path`, which is the path to the script to run, and `Vars`, which is the list of `environment_vars`, if configured. -- `elevated_user` and `elevated_password` (string) - If specified, the +- `elevated_user` and `elevated_password` (string) - If specified, the PowerShell script will be run with elevated privileges using the given Windows user. -- `remote_path` (string) - The path where the script will be uploaded to in +- `remote_path` (string) - The path where the script will be uploaded to in the machine. This defaults to "c:/Windows/Temp/script.ps1". This value must be a writable location and any parent directories must already exist. -- `start_retry_timeout` (string) - The amount of time to attempt to *start* +- `start_retry_timeout` (string) - The amount of time to attempt to *start* the remote process. By default this is "5m" or 5 minutes. This setting exists in order to deal with times when SSH may restart, such as a system reboot. 
Set this to a higher value if reboots take a longer amount of time. -- `valid_exit_codes` (list of ints) - Valid exit codes for the script. By +- `valid_exit_codes` (list of ints) - Valid exit codes for the script. By default this is just 0. - ## Default Environmental Variables In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. -- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the +- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the machine that the script is running on. This is useful if you want to run only certain parts of the script on systems built with certain builders. -- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file +- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this will be set to the address. You can use this address in your provisioner to download large files over http. This may be useful if you're experiencing diff --git a/website/source/docs/provisioners/puppet-masterless.html.md b/website/source/docs/provisioners/puppet-masterless.html.md index b8e71df1a..c630f8412 100644 --- a/website/source/docs/provisioners/puppet-masterless.html.md +++ b/website/source/docs/provisioners/puppet-masterless.html.md @@ -1,13 +1,13 @@ --- +description: | + The masterless Puppet Packer provisioner configures Puppet to run on the + machines by Packer from local modules and manifest files. 
Modules and + manifests can be uploaded from your local machine to the remote machine or can + simply use remote paths. Puppet is run in masterless mode, meaning it never + communicates to a Puppet master. layout: docs -sidebar_current: docs-provisioners-puppet-masterless -page_title: Puppet Masterless - Provisioners -description: |- - The masterless Puppet Packer provisioner configures Puppet to run on the - machines by Packer from local modules and manifest files. Modules and - manifests can be uploaded from your local machine to the remote machine or can - simply use remote paths. Puppet is run in masterless mode, meaning it never - communicates to a Puppet master. +page_title: 'Puppet Masterless - Provisioners' +sidebar_current: 'docs-provisioners-puppet-masterless' --- # Puppet (Masterless) Provisioner @@ -21,7 +21,7 @@ remote paths (perhaps obtained using something like the shell provisioner). Puppet is run in masterless mode, meaning it never communicates to a Puppet master. --> **Note:** Puppet will *not* be installed automatically by this +-> **Note:** Puppet will *not* be installed automatically by this provisioner. This provisioner expects that Puppet is already installed on the machine. It is common practice to use the [shell provisioner](/docs/provisioners/shell.html) before the Puppet provisioner to do @@ -32,7 +32,7 @@ this. The example below is fully functional and expects the configured manifest file to exist relative to your working directory. -```json +``` json { "type": "puppet-masterless", "manifest_file": "site.pp" @@ -45,7 +45,7 @@ The reference of available configuration options is listed below. Required parameters: -- `manifest_file` (string) - This is either a path to a puppet manifest +- `manifest_file` (string) - This is either a path to a puppet manifest (`.pp` file) *or* a directory containing multiple manifests that puppet will apply (the ["main manifest"](https://docs.puppetlabs.com/puppet/latest/reference/dirs_manifest.html)). 
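For instance, `manifest_file` can point at a directory of manifests rather than a single `.pp` file (the directory name below is illustrative):

``` json
{
  "type": "puppet-masterless",
  "manifest_file": "manifests/"
}
```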
@@ -54,59 +54,59 @@ Required parameters: Optional parameters: -- `execute_command` (string) - The command used to execute Puppet. This has +- `execute_command` (string) - The command used to execute Puppet. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. -- `extra_arguments` (array of strings) - This is an array of additional options to +- `extra_arguments` (array of strings) - This is an array of additional options to pass to the puppet command when executing puppet. This allows for customization of the `execute_command` without having to completely replace or include it's contents, making forward-compatible customizations much easier. -- `facter` (object of key/value strings) - Additional +- `facter` (object of key/value strings) - Additional [facts](https://puppetlabs.com/facter) to make available when Puppet is running. -- `hiera_config_path` (string) - The path to a local file with hiera +- `hiera_config_path` (string) - The path to a local file with hiera configuration to be uploaded to the remote machine. Hiera data directories must be uploaded using the file provisioner separately. -- `ignore_exit_codes` (boolean) - If true, Packer will never consider the +- `ignore_exit_codes` (boolean) - If true, Packer will never consider the provisioner a failure. -- `manifest_dir` (string) - The path to a local directory with manifests to be +- `manifest_dir` (string) - The path to a local directory with manifests to be uploaded to the remote machine. This is useful if your main manifest file uses imports. This directory doesn't necessarily contain the `manifest_file`. It is a separate directory that will be set as the "manifestdir" setting on Puppet. -~> `manifest_dir` is passed to `puppet apply` as the `--manifestdir` option. +~> `manifest_dir` is passed to `puppet apply` as the `--manifestdir` option. This option was deprecated in puppet 3.6, and removed in puppet 4.0. 
If you have multiple manifests you should use `manifest_file` instead. -- `puppet_bin_dir` (string) - The path to the directory that contains the puppet +- `puppet_bin_dir` (string) - The path to the directory that contains the puppet binary for running `puppet apply`. Usually, this would be found via the `$PATH` or `%PATH%` environment variable, but some builders (notably, the Docker one) do not run profile-setup scripts, therefore the path is usually empty. -- `module_paths` (array of strings) - This is an array of paths to module +- `module_paths` (array of strings) - This is an array of paths to module directories on your local filesystem. These will be uploaded to the remote machine. By default, this is empty. -- `prevent_sudo` (boolean) - By default, the configured commands that are +- `prevent_sudo` (boolean) - By default, the configured commands that are executed to run Puppet are executed with `sudo`. If this is true, then the sudo will be omitted. -- `staging_directory` (string) - This is the directory where all the +- `staging_directory` (string) - This is the directory where all the configuration of Puppet by Packer will be placed. By default this is "/tmp/packer-puppet-masterless". This directory doesn't need to exist but must have proper permissions so that the SSH user that Packer uses is able to create directories and write into this folder. If the permissions are not correct, use a shell provisioner prior to this to configure it properly. -- `working_directory` (string) - This is the directory from which the puppet +- `working_directory` (string) - This is the directory from which the puppet command will be run. When using hiera with a relative path, this option allows to ensure that the paths are working properly. If not specified, defaults to the value of specified `staging_directory` (or its default value @@ -117,7 +117,7 @@ multiple manifests you should use `manifest_file` instead. 
By default, Packer uses the following command (broken across multiple lines for readability) to execute Puppet: -```liquid +``` liquid cd {{.WorkingDir}} && \ {{.FacterVars}}{{if .Sudo}} sudo -E {{end}} \ {{if ne .PuppetBinDir \"\"}}{{.PuppetBinDir}}{{end}}puppet apply \ @@ -134,14 +134,14 @@ This command can be customized using the `execute_command` configuration. As you can see from the default value above, the value of this configuration can contain various template variables, defined below: -- `WorkingDir` - The path from which Puppet will be executed. -- `FacterVars` - Shell-friendly string of environmental variables used to set +- `WorkingDir` - The path from which Puppet will be executed. +- `FacterVars` - Shell-friendly string of environmental variables used to set custom facts configured for this provisioner. -- `HieraConfigPath` - The path to a hiera configuration file. -- `ManifestFile` - The path on the remote machine to the manifest file for +- `HieraConfigPath` - The path to a hiera configuration file. +- `ManifestFile` - The path on the remote machine to the manifest file for Puppet to use. -- `ModulePath` - The paths to the module directories. -- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the +- `ModulePath` - The paths to the module directories. +- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the value of the `prevent_sudo` configuration. ## Default Facts @@ -150,10 +150,10 @@ In addition to being able to specify custom Facter facts using the `facter` configuration, the provisioner automatically defines certain commonly useful facts: -- `packer_build_name` is set to the name of the build that Packer is running. +- `packer_build_name` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them in your Hiera hierarchy. 
-- `packer_builder_type` is the type of the builder that was used to create the +- `packer_builder_type` is the type of the builder that was used to create the machine that Puppet is running on. This is useful if you want to run only certain parts of your Puppet code on systems built with certain builders. diff --git a/website/source/docs/provisioners/puppet-server.html.md b/website/source/docs/provisioners/puppet-server.html.md index 4a8d20fe9..61a06d228 100644 --- a/website/source/docs/provisioners/puppet-server.html.md +++ b/website/source/docs/provisioners/puppet-server.html.md @@ -1,10 +1,10 @@ --- +description: | + The puppet-server Packer provisioner provisions Packer machines with Puppet + by connecting to a Puppet master. layout: docs -sidebar_current: docs-provisioners-puppet-server -page_title: Puppet Server - Provisioners -description: |- - The puppet-server Packer provisioner provisions Packer machines with Puppet - by connecting to a Puppet master. +page_title: 'Puppet Server - Provisioners' +sidebar_current: 'docs-provisioners-puppet-server' --- # Puppet Server Provisioner @@ -14,7 +14,7 @@ Type: `puppet-server` The `puppet-server` Packer provisioner provisions Packer machines with Puppet by connecting to a Puppet master. --> **Note:** Puppet will *not* be installed automatically by this +-> **Note:** Puppet will *not* be installed automatically by this provisioner. This provisioner expects that Puppet is already installed on the machine. It is common practice to use the [shell provisioner](/docs/provisioners/shell.html) before the Puppet provisioner to do @@ -25,7 +25,7 @@ this. The example below is fully functional and expects a Puppet server to be accessible from your network. -```json +``` json { "type": "puppet-server", "options": "--test --pluginsync", @@ -42,51 +42,51 @@ The reference of available configuration options is listed below. The provisioner takes various options. None are strictly required. 
They are listed below: -- `client_cert_path` (string) - Path to the directory on your disk that +- `client_cert_path` (string) - Path to the directory on your disk that contains the client certificate for the node. This defaults to nothing, in which case a client cert won't be uploaded. -- `client_private_key_path` (string) - Path to the directory on your disk that +- `client_private_key_path` (string) - Path to the directory on your disk that contains the client private key for the node. This defaults to nothing, in which case a client private key won't be uploaded. -- `facter` (object of key/value strings) - Additional Facter facts to make +- `facter` (object of key/value strings) - Additional Facter facts to make available to the Puppet run. -- `ignore_exit_codes` (boolean) - If true, Packer will never consider the +- `ignore_exit_codes` (boolean) - If true, Packer will never consider the provisioner a failure. -- `options` (string) - Additional command line options to pass to +- `options` (string) - Additional command line options to pass to `puppet agent` when Puppet is run. -- `prevent_sudo` (boolean) - By default, the configured commands that are +- `prevent_sudo` (boolean) - By default, the configured commands that are executed to run Puppet are executed with `sudo`. If this is true, then the sudo will be omitted. -- `puppet_node` (string) - The name of the node. If this isn't set, the fully +- `puppet_node` (string) - The name of the node. If this isn't set, the fully qualified domain name will be used. -- `puppet_server` (string) - Hostname of the Puppet server. By default +- `puppet_server` (string) - Hostname of the Puppet server. By default "puppet" will be used. -- `staging_dir` (string) - This is the directory where all the +- `staging_dir` (string) - This is the directory where all the configuration of Puppet by Packer will be placed. By default this is /tmp/packer-puppet-server. 
This directory doesn't need to exist but must have proper permissions so that the SSH user that Packer uses is able to create directories and write into this folder. If the permissions are not correct, use a shell provisioner prior to this to configure it properly. -- `puppet_bin_dir` (string) - The path to the directory that contains the puppet +- `puppet_bin_dir` (string) - The path to the directory that contains the puppet binary for running `puppet agent`. Usually, this would be found via the `$PATH` or `%PATH%` environment variable, but some builders (notably, the Docker one) do not run profile-setup scripts, therefore the path is usually empty. -- `execute_command` (string) - This is optional. The command used to execute Puppet. This has +- `execute_command` (string) - This is optional. The command used to execute Puppet. This has various [configuration template variables](/docs/templates/engine.html) available. See below for more information. By default, Packer uses the following command: -```liquid +``` liquid {{.FacterVars}} {{if .Sudo}} sudo -E {{end}} \ {{if ne .PuppetBinDir \"\"}}{{.PuppetBinDir}}/{{end}}puppet agent --onetime --no-daemonize \ {{if ne .PuppetServer \"\"}}--server='{{.PuppetServer}}' {{end}} \ @@ -103,10 +103,10 @@ In addition to being able to specify custom Facter facts using the `facter` configuration, the provisioner automatically defines certain commonly useful facts: -- `packer_build_name` is set to the name of the build that Packer is running. +- `packer_build_name` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them in your Hiera hierarchy. -- `packer_builder_type` is the type of the builder that was used to create the +- `packer_builder_type` is the type of the builder that was used to create the machine that Puppet is running on. 
This is useful if you want to run only certain parts of your Puppet code on systems built with certain builders. diff --git a/website/source/docs/provisioners/salt-masterless.html.md b/website/source/docs/provisioners/salt-masterless.html.md index 206571e3c..91d1fe7cc 100644 --- a/website/source/docs/provisioners/salt-masterless.html.md +++ b/website/source/docs/provisioners/salt-masterless.html.md @@ -1,10 +1,10 @@ --- +description: | + The salt-masterless Packer provisioner provisions machines built by Packer + using Salt states, without connecting to a Salt master. layout: docs -sidebar_current: docs-provisioners-salt-masterless -page_title: Salt Masterless - Provisioners -description: |- - The salt-masterless Packer provisioner provisions machines built by Packer - using Salt states, without connecting to a Salt master. +page_title: 'Salt Masterless - Provisioners' +sidebar_current: 'docs-provisioners-salt-masterless' --- # Salt Masterless Provisioner @@ -18,7 +18,7 @@ using [Salt](http://saltstack.com/) states, without connecting to a Salt master. The example below is fully functional. -```json +``` json { "type": "salt-masterless", "local_state_tree": "/Users/me/salt" @@ -32,60 +32,59 @@ required argument is the path to your local salt state tree. Optional: -- `bootstrap_args` (string) - Arguments to send to the bootstrap script. Usage +- `bootstrap_args` (string) - Arguments to send to the bootstrap script. Usage is somewhat documented on [github](https://github.com/saltstack/salt-bootstrap), but the [script itself](https://github.com/saltstack/salt-bootstrap/blob/develop/bootstrap-salt.sh) has more detailed usage instructions. By default, no arguments are sent to the script. -- `disable_sudo` (boolean) - By default, the bootstrap install command is prefixed with `sudo`. When using a +- `disable_sudo` (boolean) - By default, the bootstrap install command is prefixed with `sudo`. 
When using a Docker builder, you will likely want to pass `true` since `sudo` is often not pre-installed. -- `remote_pillar_roots` (string) - The path to your remote [pillar +- `remote_pillar_roots` (string) - The path to your remote [pillar roots](http://docs.saltstack.com/ref/configuration/master.html#pillar-configuration). default: `/srv/pillar`. This option cannot be used with `minion_config`. -- `remote_state_tree` (string) - The path to your remote [state +- `remote_state_tree` (string) - The path to your remote [state tree](http://docs.saltstack.com/ref/states/highstate.html#the-salt-state-tree). default: `/srv/salt`. This option cannot be used with `minion_config`. -- `local_pillar_roots` (string) - The path to your local [pillar +- `local_pillar_roots` (string) - The path to your local [pillar roots](http://docs.saltstack.com/ref/configuration/master.html#pillar-configuration). This will be uploaded to the `remote_pillar_roots` on the remote. -- `local_state_tree` (string) - The path to your local [state +- `local_state_tree` (string) - The path to your local [state tree](http://docs.saltstack.com/ref/states/highstate.html#the-salt-state-tree). This will be uploaded to the `remote_state_tree` on the remote. -- `custom_state` (string) - A state to be run instead of `state.highstate`. +- `custom_state` (string) - A state to be run instead of `state.highstate`. Defaults to `state.highstate` if unspecified. -- `minion_config` (string) - The path to your local [minion config +- `minion_config` (string) - The path to your local [minion config file](http://docs.saltstack.com/ref/configuration/minion.html). This will be uploaded to the `/etc/salt` on the remote. This option overrides the `remote_state_tree` or `remote_pillar_roots` options. -- `grains_file` (string) - The path to your local [grains file]( - https://docs.saltstack.com/en/latest/topics/grains). 
This will be +- `grains_file` (string) - The path to your local [grains file](https://docs.saltstack.com/en/latest/topics/grains). This will be uploaded to `/etc/salt/grains` on the remote. -- `skip_bootstrap` (boolean) - By default the salt provisioner runs [salt +- `skip_bootstrap` (boolean) - By default the salt provisioner runs [salt bootstrap](https://github.com/saltstack/salt-bootstrap) to install salt. Set this to true to skip this step. -- `temp_config_dir` (string) - Where your local state tree will be copied +- `temp_config_dir` (string) - Where your local state tree will be copied before moving to the `/srv/salt` directory. Default is `/tmp/salt`. -- `no_exit_on_failure` (boolean) - Packer will exit if the `salt-call` command +- `no_exit_on_failure` (boolean) - Packer will exit if the `salt-call` command fails. Set this option to true to ignore Salt failures. -- `log_level` (string) - Set the logging level for the `salt-call` run. +- `log_level` (string) - Set the logging level for the `salt-call` run. -- `salt_call_args` (string) - Additional arguments to pass directly to `salt-call`. See +- `salt_call_args` (string) - Additional arguments to pass directly to `salt-call`. See [salt-call](https://docs.saltstack.com/ref/cli/salt-call.html) documentation for more information. By default no additional arguments (besides the ones Packer generates) are passed to `salt-call`. -- `salt_bin_dir` (string) - Path to the `salt-call` executable. Useful if it is not +- `salt_bin_dir` (string) - Path to the `salt-call` executable. Useful if it is not on the PATH. diff --git a/website/source/docs/provisioners/shell-local.html.md b/website/source/docs/provisioners/shell-local.html.md index c9ce12c1f..bb9022453 100644 --- a/website/source/docs/provisioners/shell-local.html.md +++ b/website/source/docs/provisioners/shell-local.html.md @@ -1,11 +1,11 @@ --- +description: | + The shell Packer provisioner provisions machines built by Packer using shell + scripts. 
Shell provisioning is the easiest way to get software installed and + configured on a machine. layout: docs -sidebar_current: docs-provisioners-shell-local -page_title: Shell (Local) - Provisioners -description: |- - The shell Packer provisioner provisions machines built by Packer using shell - scripts. Shell provisioning is the easiest way to get software installed and - configured on a machine. +page_title: 'Shell (Local) - Provisioners' +sidebar_current: 'docs-provisioners-shell-local' --- # Local Shell Provisioner @@ -20,7 +20,7 @@ shell scripts on a remote machine. The example below is fully functional. -```json +``` json { "type": "shell-local", "command": "echo foo" @@ -34,12 +34,12 @@ required element is "command". Required: -- `command` (string) - The command to execute. This will be executed within +- `command` (string) - The command to execute. This will be executed within the context of a shell as specified by `execute_command`. Optional parameters: -- `execute_command` (array of strings) - The command to use to execute +- `execute_command` (array of strings) - The command to use to execute the script. By default this is `["/bin/sh", "-c", "{{.Command}}"]`. The value is an array of arguments executed directly by the OS. The value of this is treated as [configuration diff --git a/website/source/docs/provisioners/shell.html.md b/website/source/docs/provisioners/shell.html.md index 160e4143d..23e7a0d6e 100644 --- a/website/source/docs/provisioners/shell.html.md +++ b/website/source/docs/provisioners/shell.html.md @@ -1,11 +1,11 @@ --- +description: | + The shell Packer provisioner provisions machines built by Packer using shell + scripts. Shell provisioning is the easiest way to get software installed and + configured on a machine. layout: docs -sidebar_current: docs-provisioners-shell-remote -page_title: Shell - Provisioners -description: |- - The shell Packer provisioner provisions machines built by Packer using shell - scripts. 
Shell provisioning is the easiest way to get software installed and - configured on a machine. +page_title: 'Shell - Provisioners' +sidebar_current: 'docs-provisioners-shell-remote' --- # Shell Provisioner @@ -16,7 +16,7 @@ The shell Packer provisioner provisions machines built by Packer using shell scripts. Shell provisioning is the easiest way to get software installed and configured on a machine. --> **Building Windows images?** You probably want to use the +-> **Building Windows images?** You probably want to use the [PowerShell](/docs/provisioners/powershell.html) or [Windows Shell](/docs/provisioners/windows-shell.html) provisioners. @@ -24,7 +24,7 @@ Shell](/docs/provisioners/windows-shell.html) provisioners. The example below is fully functional. -```json +``` json { "type": "shell", "inline": ["echo foo"] @@ -38,66 +38,66 @@ required element is either "inline" or "script". Every other option is optional. Exactly *one* of the following is required: -- `inline` (array of strings) - This is an array of commands to execute. The +- `inline` (array of strings) - This is an array of commands to execute. The commands are concatenated by newlines and turned into a single file, so they are all executed within the same context. This allows you to change directories in one command and use something in the directory in the next and so on. Inline scripts are the easiest way to pull off simple tasks within the machine. -- `script` (string) - The path to a script to upload and execute in +- `script` (string) - The path to a script to upload and execute in the machine. This path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed. -- `scripts` (array of strings) - An array of scripts to execute. The scripts +- `scripts` (array of strings) - An array of scripts to execute. The scripts will be uploaded and executed in the order specified. 
Each script is executed in isolation, so state such as variables from one script won't carry on to the next. Optional parameters: -- `binary` (boolean) - If true, specifies that the script(s) are binary files, +- `binary` (boolean) - If true, specifies that the script(s) are binary files, and Packer should therefore not convert Windows line endings to Unix line endings (if there are any). By default this is false. -- `environment_vars` (array of strings) - An array of key/value pairs to +- `environment_vars` (array of strings) - An array of key/value pairs to inject prior to the execute\_command. The format should be `key=value`. Packer injects some environmental variables by default into the environment, as well, which are covered in the section below. -- `execute_command` (string) - The command to use to execute the script. By +- `execute_command` (string) - The command to use to execute the script. By default this is `chmod +x {{ .Path }}; env {{ .Vars }} {{ .Path }}`. The value of this is treated as [configuration template](/docs/templates/engine.html). There are two available variables: `Path`, which is the path to the script to run, and `Vars`, which is the list of `environment_vars`, if configured. -- `expect_disconnect` (bool) - Defaults to true. Whether to error if the +- `expect_disconnect` (bool) - Defaults to true. Whether to error if the server disconnects us. A disconnect might happen if you restart the ssh server or reboot the host. May default to false in the future. -- `inline_shebang` (string) - The +- `inline_shebang` (string) - The [shebang](https://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use when running commands specified by `inline`. By default, this is `/bin/sh -e`. If you're not using `inline`, then this configuration has no effect. **Important:** If you customize this, be sure to include something like the `-e` flag, otherwise individual steps failing won't fail the provisioner. 
-- `remote_folder` (string) - The folder where the uploaded script will reside on +- `remote_folder` (string) - The folder where the uploaded script will reside on the machine. This defaults to '/tmp'. -- `remote_file` (string) - The filename the uploaded script will have on the machine. - This defaults to 'script_nnn.sh'. +- `remote_file` (string) - The filename the uploaded script will have on the machine. + This defaults to 'script\_nnn.sh'. -- `remote_path` (string) - The full path to the uploaded script will have on the - machine. By default this is remote_folder/remote_file, if set this option will - override both remote_folder and remote_file. +- `remote_path` (string) - The full path the uploaded script will have on the + machine. By default this is remote\_folder/remote\_file; if set, this option will + override both remote\_folder and remote\_file. -- `skip_clean` (boolean) - If true, specifies that the helper scripts +- `skip_clean` (boolean) - If true, specifies that the helper scripts uploaded to the system will not be removed by Packer. This defaults to false (clean scripts from the system). -- `start_retry_timeout` (string) - The amount of time to attempt to *start* +- `start_retry_timeout` (string) - The amount of time to attempt to *start* the remote process. By default this is `5m` or 5 minutes. This setting exists in order to deal with times when SSH may restart, such as a system reboot. Set this to a higher value if reboots take a longer amount @@ -116,7 +116,7 @@ Some operating systems default to a non-root user. For example, if you log in as `ubuntu` and can sudo using the password `packer`, then you'll want to change `execute_command` to be: -```text +``` text "echo 'packer' | sudo -S sh -c '{{ .Vars }} {{ .Path }}'" ``` @@ -131,7 +131,7 @@ privileges without worrying about password prompts. 
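In template form, such a customized command sits in the provisioner's `execute_command` field. A sketch, with a hypothetical script path (the sudo command string is the one documented above):

``` json
{
  "type": "shell",
  "script": "scripts/setup.sh",
  "execute_command": "echo 'packer' | sudo -S sh -c '{{ .Vars }} {{ .Path }}'"
}
```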
The following contrived example shows how to pass environment variables and change the permissions of the script to be executed: -```text +``` text chmod +x {{ .Path }}; chmod 0700 {{ .Path}}; env {{ .Vars }} {{ .Path }} ``` @@ -141,15 +141,15 @@ In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. -- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the +- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the machine that the script is running on. This is useful if you want to run only certain parts of the script on systems built with certain builders. -- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file +- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this will be set to the address. You can use this address in your provisioner to download large files over http. This may be useful if you're experiencing @@ -168,9 +168,9 @@ scripts. The amount of time the provisioner will wait is configured using Sometimes, when executing a command like `reboot`, the shell script will return and Packer will start executing the next one before SSH actually quits and the -machine restarts. For this, put use "pause_before" to make Packer wait before executing the next script: +machine restarts. 
For this, use "pause\_before" to make Packer wait before executing the next script: -```json +``` json { "type": "shell", "script": "script.sh", @@ -183,7 +183,7 @@ causing the provisioner to hang despite a reboot occurring. In this case, make sure you shut down the network interfaces on reboot or in your shell script. For example, on Gentoo: -```text +``` text /etc/init.d/net.eth0 stop ``` @@ -203,7 +203,7 @@ provisioner](/docs/provisioners/file.html) (more secure) or using `ssh-keyscan` to populate the file (less secure). An example of the latter accessing GitHub would be: -```json +``` json { "type": "shell", "inline": [ @@ -218,7 +218,7 @@ would be: *My shell script doesn't work correctly on Ubuntu* -- On Ubuntu, the `/bin/sh` shell is +- On Ubuntu, the `/bin/sh` shell is [dash](https://en.wikipedia.org/wiki/Debian_Almquist_shell). If your script has [bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell))-specific commands in it, then put `#!/bin/bash -e` at the top of your script. @@ -227,26 +227,26 @@ would be: *My shell works when I log in but fails with the shell provisioner* -- See the above tip. More than likely, your login shell is using `/bin/bash` +- See the above tip. More than likely, your login shell is using `/bin/bash` while the provisioner is using `/bin/sh`. *My installs hang when using `apt-get` or `yum`* -- Make sure you add a `-y` to the command to prevent it from requiring user +- Make sure you add a `-y` to the command to prevent it from requiring user input before proceeding. *How do I tell what my shell script is doing?* -- Adding a `-x` flag to the shebang at the top of the script (`#!/bin/sh -x`) +- Adding a `-x` flag to the shebang at the top of the script (`#!/bin/sh -x`) will echo the script statements as they execute.
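The same `-x` trick works for `inline` commands via the `inline_shebang` option documented earlier; a sketch with illustrative commands:

``` json
{
  "type": "shell",
  "inline_shebang": "/bin/sh -ex",
  "inline": [
    "echo 'configuring system'",
    "date"
  ]
}
```

Keeping `-e` alongside `-x` preserves the fail-fast behavior the documentation recommends while echoing each statement to Packer's output.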
*My builds don't always work the same* -- Some distributions start the SSH daemon before other core services which can +- Some distributions start the SSH daemon before other core services which can create race conditions. Your first provisioner can tell the machine to wait until it completely boots. -```json +``` json { "type": "shell", "inline": [ "sleep 10" ] diff --git a/website/source/docs/provisioners/windows-restart.html.md b/website/source/docs/provisioners/windows-restart.html.md index 1612a1473..450977b55 100644 --- a/website/source/docs/provisioners/windows-restart.html.md +++ b/website/source/docs/provisioners/windows-restart.html.md @@ -1,10 +1,10 @@ --- +description: | + The Windows restart provisioner restarts a Windows machine and waits for it to + come back up. layout: docs -sidebar_current: docs-provisioners-windows-restart -page_title: Windows Restart - Provisioners -description: |- - The Windows restart provisioner restarts a Windows machine and waits for it to - come back up. +page_title: 'Windows Restart - Provisioners' +sidebar_current: 'docs-provisioners-windows-restart' --- # Windows Restart Provisioner @@ -25,7 +25,7 @@ through the Windows Remote Management (WinRM) service, not by ACPI functions, so The example below is fully functional. -```json +``` json { "type": "windows-restart" } @@ -37,15 +37,14 @@ The reference of available configuration options is listed below. Optional parameters: -- `restart_command` (string) - The command to execute to initiate the - restart. By default this is `shutdown /r /c "packer restart" /t 5 && net - stop winrm`. A key action of this is to stop WinRM so that Packer can +- `restart_command` (string) - The command to execute to initiate the + restart. By default this is `shutdown /r /c "packer restart" /t 5 && net stop winrm`. A key action of this is to stop WinRM so that Packer can detect it is rebooting. 
-- `restart_check_command` (string) - A command to execute to check if the +- `restart_check_command` (string) - A command to execute to check if the restart succeeded. This will be done in a loop. -- `restart_timeout` (string) - The timeout to wait for the restart. By +- `restart_timeout` (string) - The timeout to wait for the restart. By default this is 5 minutes. Example value: `5m`. If you are installing updates or have a lot of startup services, you will probably need to increase this duration. diff --git a/website/source/docs/provisioners/windows-shell.html.md b/website/source/docs/provisioners/windows-shell.html.md index 702e759ba..cdf4469fa 100644 --- a/website/source/docs/provisioners/windows-shell.html.md +++ b/website/source/docs/provisioners/windows-shell.html.md @@ -1,10 +1,10 @@ --- +description: | + The windows-shell Packer provisioner runs commands on Windows using the cmd + shell. layout: docs -sidebar_current: docs-provisioners-windows-shell -page_title: Windows Shell - Provisioners -description: |- - The windows-shell Packer provisioner runs commands on Windows using the cmd - shell. +page_title: 'Windows Shell - Provisioners' +sidebar_current: 'docs-provisioners-windows-shell' --- # Windows Shell Provisioner @@ -18,7 +18,7 @@ The windows-shell Packer provisioner runs commands on a Windows machine using The example below is fully functional. -```json +``` json { "type": "windows-shell", "inline": ["dir c:\\"] @@ -32,65 +32,64 @@ required element is either "inline" or "script". Every other option is optional. Exactly *one* of the following is required: -- `inline` (array of strings) - This is an array of commands to execute. The +- `inline` (array of strings) - This is an array of commands to execute. The commands are concatenated by newlines and turned into a single file, so they are all executed within the same context. This allows you to change directories in one command and use something in the directory in the next and so on. 
Inline scripts are the easiest way to pull off simple tasks within the machine. -- `script` (string) - The path to a script to upload and execute in +- `script` (string) - The path to a script to upload and execute in the machine. This path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed. -- `scripts` (array of strings) - An array of scripts to execute. The scripts +- `scripts` (array of strings) - An array of scripts to execute. The scripts will be uploaded and executed in the order specified. Each script is executed in isolation, so state such as variables from one script won't carry on to the next. Optional parameters: -- `binary` (boolean) - If true, specifies that the script(s) are binary files, +- `binary` (boolean) - If true, specifies that the script(s) are binary files, and Packer should therefore not convert Windows line endings to Unix line endings (if there are any). By default this is false. -- `environment_vars` (array of strings) - An array of key/value pairs to +- `environment_vars` (array of strings) - An array of key/value pairs to inject prior to the execute\_command. The format should be `key=value`. Packer injects some environmental variables by default into the environment, as well, which are covered in the section below. -- `execute_command` (string) - The command to use to execute the script. By +- `execute_command` (string) - The command to use to execute the script. By default this is `{{ .Vars }}"{{ .Path }}"`. The value of this is treated as [template engine](/docs/templates/engine.html). There are two available variables: `Path`, which is the path to the script to run, and `Vars`, which is the list of `environment_vars`, if configured. -- `remote_path` (string) - The path where the script will be uploaded to in +- `remote_path` (string) - The path where the script will be uploaded to in the machine. This defaults to "c:/Windows/Temp/script.bat". 
This value must be a writable location and any parent directories must already exist. -- `start_retry_timeout` (string) - The amount of time to attempt to *start* +- `start_retry_timeout` (string) - The amount of time to attempt to *start* the remote process. By default this is "5m" or 5 minutes. This setting exists in order to deal with times when SSH may restart, such as a system reboot. Set this to a higher value if reboots take a longer amount of time. - ## Default Environmental Variables In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. -- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the +- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the machine that the script is running on. This is useful if you want to run only certain parts of the script on systems built with certain builders. -- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file +- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this will be set to the address. You can use this address in your provisioner to download large files over http. 
This may be useful if you're experiencing diff --git a/website/source/docs/templates/builders.html.md b/website/source/docs/templates/builders.html.md index 25e2fd5cf..758305b8c 100644 --- a/website/source/docs/templates/builders.html.md +++ b/website/source/docs/templates/builders.html.md @@ -1,10 +1,10 @@ --- +description: | + Within the template, the builders section contains an array of all the + builders that Packer should use to generate machine images for the template. layout: docs -sidebar_current: docs-templates-builders -page_title: Builders - Templates -description: |- - Within the template, the builders section contains an array of all the - builders that Packer should use to generate machine images for the template. +page_title: 'Builders - Templates' +sidebar_current: 'docs-templates-builders' --- # Template Builders @@ -23,7 +23,7 @@ referenced from the documentation for that specific builder. Within a template, a section of builder definitions looks like this: -```json +``` json { "builders": [ // ... one or more builder definitions here @@ -45,7 +45,7 @@ These are placed directly within the builder definition. An example builder definition is shown below, in this case configuring the AWS builder: -```json +``` json { "type": "amazon-ebs", "access_key": "...", diff --git a/website/source/docs/templates/communicator.html.md b/website/source/docs/templates/communicator.html.md index 1743eba49..07d6229ed 100644 --- a/website/source/docs/templates/communicator.html.md +++ b/website/source/docs/templates/communicator.html.md @@ -1,10 +1,10 @@ --- +description: | + Communicators are the mechanism Packer uses to upload files, execute scripts, + etc. with the machine being created. layout: docs -sidebar_current: docs-templates-communicators -page_title: Communicators - Templates -description: |- - Communicators are the mechanism Packer uses to upload files, execute scripts, - etc. with the machine being created. 
+page_title: 'Communicators - Templates' +sidebar_current: 'docs-templates-communicators' --- # Template Communicators @@ -15,13 +15,13 @@ scripts, etc. with the machine being created. Communicators are configured within the [builder](/docs/templates/builders.html) section. Packer currently supports three kinds of communicators: -- `none` - No communicator will be used. If this is set, most provisioners - also can't be used. +- `none` - No communicator will be used. If this is set, most provisioners + also can't be used. -- `ssh` - An SSH connection will be established to the machine. This is - usually the default. +- `ssh` - An SSH connection will be established to the machine. This is + usually the default. -- `winrm` - A WinRM connection will be established. +- `winrm` - A WinRM connection will be established. In addition to the above, some builders have custom communicators they can use. For example, the Docker builder has a "docker" communicator that uses @@ -36,7 +36,7 @@ configure everything. However, to specify a communicator, you set the `communicator` key within a build. Multiple builds can have different communicators. Example: -```json +``` json { "builders": [ { @@ -58,77 +58,77 @@ the SSH agent to the remote host. The SSH communicator has the following options: -- `ssh_bastion_agent_auth` (boolean) - If true, the local SSH agent will +- `ssh_bastion_agent_auth` (boolean) - If true, the local SSH agent will be used to authenticate with the bastion host. Defaults to false. -- `ssh_bastion_host` (string) - A bastion host to use for the actual +- `ssh_bastion_host` (string) - A bastion host to use for the actual SSH connection. -- `ssh_bastion_password` (string) - The password to use to authenticate +- `ssh_bastion_password` (string) - The password to use to authenticate with the bastion host. -- `ssh_bastion_port` (integer) - The port of the bastion host. Defaults to - 22. +- `ssh_bastion_port` (integer) - The port of the bastion host. Defaults to + 22. 
-- `ssh_bastion_private_key_file` (string) - A private key file to use +- `ssh_bastion_private_key_file` (string) - A private key file to use to authenticate with the bastion host. -- `ssh_bastion_username` (string) - The username to connect to the bastion +- `ssh_bastion_username` (string) - The username to connect to the bastion host. -- `ssh_disable_agent` (boolean) - If true, SSH agent forwarding will be +- `ssh_disable_agent` (boolean) - If true, SSH agent forwarding will be disabled. Defaults to false. -- `ssh_file_transfer_method` (`scp` or `sftp`) - How to transfer files, Secure +- `ssh_file_transfer_method` (`scp` or `sftp`) - How to transfer files, Secure copy (default) or SSH File Transfer Protocol. -- `ssh_handshake_attempts` (integer) - The number of handshakes to attempt +- `ssh_handshake_attempts` (integer) - The number of handshakes to attempt with SSH once it can connect. This defaults to 10. -- `ssh_host` (string) - The address to SSH to. This usually is automatically +- `ssh_host` (string) - The address to SSH to. This usually is automatically configured by the builder. -- `ssh_password` (string) - A plaintext password to use to authenticate +- `ssh_password` (string) - A plaintext password to use to authenticate with SSH. -- `ssh_port` (integer) - The port to connect to SSH. This defaults to 22. +- `ssh_port` (integer) - The port to connect to SSH. This defaults to 22. -- `ssh_private_key_file` (string) - Path to a PEM encoded private key +- `ssh_private_key_file` (string) - Path to a PEM encoded private key file to use to authenticate with SSH. -- `ssh_pty` (boolean) - If true, a PTY will be requested for the SSH +- `ssh_pty` (boolean) - If true, a PTY will be requested for the SSH connection. This defaults to false. -- `ssh_timeout` (string) - The time to wait for SSH to become available. +- `ssh_timeout` (string) - The time to wait for SSH to become available. 
Packer uses this to determine when the machine has booted so this is usually quite long. Example value: "10m" -- `ssh_username` (string) - The username to connect to SSH with. Required +- `ssh_username` (string) - The username to connect to SSH with. Required if using SSH. ## WinRM Communicator The WinRM communicator has the following options. -- `winrm_host` (string) - The address for WinRM to connect to. +- `winrm_host` (string) - The address for WinRM to connect to. -- `winrm_port` (integer) - The WinRM port to connect to. This defaults to +- `winrm_port` (integer) - The WinRM port to connect to. This defaults to 5985 for plain unencrypted connection and 5986 for SSL when `winrm_use_ssl` is set to true. -- `winrm_username` (string) - The username to use to connect to WinRM. +- `winrm_username` (string) - The username to use to connect to WinRM. -- `winrm_password` (string) - The password to use to connect to WinRM. +- `winrm_password` (string) - The password to use to connect to WinRM. -- `winrm_timeout` (string) - The amount of time to wait for WinRM to +- `winrm_timeout` (string) - The amount of time to wait for WinRM to become available. This defaults to "30m" since setting up a Windows machine generally takes a long time. -- `winrm_use_ssl` (boolean) - If true, use HTTPS for WinRM +- `winrm_use_ssl` (boolean) - If true, use HTTPS for WinRM. -- `winrm_insecure` (boolean) - If true, do not check server certificate +- `winrm_insecure` (boolean) - If true, do not check server certificate chain and host name. -- `winrm_use_ntlm` (boolean) - If true, NTLM authentication will be used for WinRM, - rather than default (basic authentication), removing the requirement for basic - authentication to be enabled within the target guest. Further reading for remote - connection authentication can be found [here](https://msdn.microsoft.com/en-us/library/aa384295(v=vs.85).aspx). 
\ No newline at end of file +- `winrm_use_ntlm` (boolean) - If true, NTLM authentication will be used for WinRM, + rather than the default (basic authentication), removing the requirement for basic + authentication to be enabled within the target guest. Further reading for remote + connection authentication can be found [here](https://msdn.microsoft.com/en-us/library/aa384295(v=vs.85).aspx). diff --git a/website/source/docs/templates/engine.html.md b/website/source/docs/templates/engine.html.md index 7512af20f..06f5d277d 100644 --- a/website/source/docs/templates/engine.html.md +++ b/website/source/docs/templates/engine.html.md @@ -1,11 +1,11 @@ --- +description: | + All strings within templates are processed by a common Packer templating + engine, where variables and functions can be used to modify the value of a + configuration parameter at runtime. layout: docs -sidebar_current: docs-templates-engine -page_title: Template Engine - Templates -description: |- - All strings within templates are processed by a common Packer templating - engine, where variables and functions can be used to modify the value of a - configuration parameter at runtime. +page_title: 'Template Engine - Templates' +sidebar_current: 'docs-templates-engine' --- # Template Engine @@ -16,46 +16,45 @@ configuration parameter at runtime. The syntax of templates uses the following conventions: -* Anything template related happens within double-braces: `{{ }}`. -* Functions are specified directly within the braces, such as `{{timestamp}}`. -* Template variables are prefixed with a period and capitalized, such as - `{{.Variable}}`. +- Anything template related happens within double-braces: `{{ }}`. +- Functions are specified directly within the braces, such as `{{timestamp}}`. +- Template variables are prefixed with a period and capitalized, such as + `{{.Variable}}`. 
## Functions Functions perform operations on and within strings; for example, the `{{timestamp}}` function can be used in any string to generate the current timestamp. This is useful for configurations that require unique -keys, such as AMI names. By setting the AMI name to something like `My Packer -AMI {{timestamp}}`, the AMI name will be unique down to the second. If you +keys, such as AMI names. By setting the AMI name to something like `My Packer AMI {{timestamp}}`, the AMI name will be unique down to the second. If you need greater than one second granularity, you should use `{{uuid}}`, for example when you have multiple builders in the same template. Here is a full list of the available functions for reference. -- `build_name` - The name of the build being run. -- `build_type` - The type of the builder being used currently. -- `isotime [FORMAT]` - UTC time, which can be +- `build_name` - The name of the build being run. +- `build_type` - The type of the builder being used currently. +- `isotime [FORMAT]` - UTC time, which can be [formatted](https://golang.org/pkg/time/#example_Time_Format). See more examples below in [the `isotime` format reference](/docs/templates/engine.html#isotime-function-format-reference). -- `lower` - Lowercases the string. -- `pwd` - The working directory while executing Packer. -- `template_dir` - The directory to the template for the build. -- `timestamp` - The current Unix timestamp in UTC. -- `uuid` - Returns a random UUID. -- `upper` - Uppercases the string. -- `user` - Specifies a user variable. +- `lower` - Lowercases the string. +- `pwd` - The working directory while executing Packer. +- `template_dir` - The directory to the template for the build. +- `timestamp` - The current Unix timestamp in UTC. +- `uuid` - Returns a random UUID. +- `upper` - Uppercases the string. +- `user` - Specifies a user variable. #### Specific to Amazon builders: -- `clean_ami_name` - AMI names can only contain certain characters. 
This - function will replace illegal characters with a '-" character. Example usage - since ":" is not a legal AMI name is: `{{isotime | clean_ami_name}}`. +- `clean_ami_name` - AMI names can only contain certain characters. This + function will replace illegal characters with a '-' character. Example usage, + since ":" is not legal in an AMI name, is: `{{isotime | clean_ami_name}}`. ## Template variables Template variables are special variables automatically set by Packer at build time. Some builders, provisioners and other components have template variables that are available only for that component. Template variables are recognizable because they're prefixed by a period, such as `{{ .Name }}`. For example, when using the [`shell`](/docs/provisioners/shell.html) provisioner, template variables are available to customize the [`execute_command`](/docs/provisioners/shell.html#execute_command) parameter used to determine how Packer will run the shell command. -```liquid +``` liquid { "provisioners": [ { @@ -71,7 +70,7 @@ Template variables are special variables automatically set by Packer at build ti The `{{ .Vars }}` and `{{ .Path }}` template variables will be replaced with the list of the environment variables and the path to the script to be executed respectively. --> **Note:** In addition to template variables, you can specify your own user variables. See the [user variable](/docs/templates/user-variables.html) documentation for more information on user variables. +-> **Note:** In addition to template variables, you can specify your own user variables. See the [user variable](/docs/templates/user-variables.html) documentation for more information on user variables. # isotime Function Format Reference @@ -168,14 +167,13 @@ Formatting for the function `isotime` uses the magic reference date **Mon Jan 2 - *The values in parentheses are the abbreviated, or 24-hour clock values* Note that "-0700" is always formatted into "+0000" because `isotime` is always UTC time. 
Here are some examples of formatted time, using the above format options: -```liquid +``` liquid isotime = June 7, 7:22:43pm 2014 {{isotime "2006-01-02"}} = 2014-06-07 @@ -186,7 +184,7 @@ isotime = June 7, 7:22:43pm 2014 Please note that double quote characters need escaping inside of templates (in this case, on the `ami_name` value): -```json +``` json { "builders": [ { @@ -203,4 +201,4 @@ Please note that double quote characters need escaping inside of templates (in t } ``` --> **Note:** See the [Amazon builder](/docs/builders/amazon.html) documentation for more information on how to correctly configure the Amazon builder in this example. +-> **Note:** See the [Amazon builder](/docs/builders/amazon.html) documentation for more information on how to correctly configure the Amazon builder in this example. diff --git a/website/source/docs/templates/index.html.md b/website/source/docs/templates/index.html.md index d78c28597..40dc310af 100644 --- a/website/source/docs/templates/index.html.md +++ b/website/source/docs/templates/index.html.md @@ -1,13 +1,13 @@ --- +description: | + Templates are JSON files that configure the various components of Packer in + order to create one or more machine images. Templates are portable, static, + and readable and writable by both humans and computers. This has the added + benefit of being able to not only create and modify templates by hand, but + also write scripts to dynamically create or modify templates. layout: docs page_title: Templates -sidebar_current: docs-templates -description: |- - Templates are JSON files that configure the various components of Packer in - order to create one or more machine images. Templates are portable, static, - and readable and writable by both humans and computers. This has the added - benefit of being able to not only create and modify templates by hand, but - also write scripts to dynamically create or modify templates. 
+sidebar_current: 'docs-templates' --- # Templates @@ -28,37 +28,37 @@ A template is a JSON object that has a set of keys configuring various components of Packer. The available keys within a template are listed below. Along with each key, it is noted whether it is required or not. -- `builders` (*required*) is an array of one or more objects that defines the +- `builders` (*required*) is an array of one or more objects that defines the builders that will be used to create machine images for this template, and configures each of those builders. For more information on how to define and configure a builder, read the sub-section on [configuring builders in templates](/docs/templates/builders.html). -- `description` (optional) is a string providing a description of what the +- `description` (optional) is a string providing a description of what the template does. This output is used only in the [inspect command](/docs/commands/inspect.html). -- `min_packer_version` (optional) is a string that has a minimum Packer +- `min_packer_version` (optional) is a string that has a minimum Packer version that is required to parse the template. This can be used to ensure that proper versions of Packer are used with the template. A max version can't be specified because Packer retains backwards compatibility with `packer fix`. -- `post-processors` (optional) is an array of one or more objects that defines +- `post-processors` (optional) is an array of one or more objects that defines the various post-processing steps to take with the built images. If not specified, then no post-processing will be done. For more information on what post-processors do and how they're defined, read the sub-section on [configuring post-processors in templates](/docs/templates/post-processors.html). 
-- `provisioners` (optional) is an array of one or more objects that defines +- `provisioners` (optional) is an array of one or more objects that defines the provisioners that will be used to install and configure software for the machines created by each of the builders. If it is not specified, then no provisioners will be run. For more information on how to define and configure a provisioner, read the sub-section on [configuring provisioners in templates](/docs/templates/provisioners.html). -- `variables` (optional) is an object of one or more key/value strings that +- `variables` (optional) is an object of one or more key/value strings that defines user variables contained in the template. If it is not specified, then no variables are defined. For more information on how to define and use user variables, read the sub-section on [user variables in @@ -70,7 +70,7 @@ JSON doesn't support comments and Packer reports unknown keys as validation errors. If you'd like to comment your template, you can prefix a *root level* key with an underscore. Example: -```json +``` json { "_comment": "This is a comment", "builders": [ @@ -86,9 +86,9 @@ builders, provisioners, etc. will still result in validation errors. Below is an example of a basic template that could be invoked with `packer build`. It would create an instance in AWS, and once running copy a script to it and run that script using SSH. --> **Note:** This example requires an account with Amazon Web Services. There are a number of parameters which need to be provided for a functional build to take place. See the [Amazon builder](/docs/builders/amazon.html) documentation for more information. +-> **Note:** This example requires an account with Amazon Web Services. There are a number of parameters which need to be provided for a functional build to take place. See the [Amazon builder](/docs/builders/amazon.html) documentation for more information. 
-```json +``` json { "builders": [ { diff --git a/website/source/docs/templates/post-processors.html.md b/website/source/docs/templates/post-processors.html.md index 4b354a085..3acda145e 100644 --- a/website/source/docs/templates/post-processors.html.md +++ b/website/source/docs/templates/post-processors.html.md @@ -1,11 +1,11 @@ --- +description: | + The post-processor section within a template configures any post-processing + that will be done to images built by the builders. Examples of post-processing + would be compressing files, uploading artifacts, etc. layout: docs -sidebar_current: docs-templates-post-processors -page_title: Post-Processors - Templates -description: |- - The post-processor section within a template configures any post-processing - that will be done to images built by the builders. Examples of post-processing - would be compressing files, uploading artifacts, etc. +page_title: 'Post-Processors - Templates' +sidebar_current: 'docs-templates-post-processors' --- # Template Post-Processors @@ -25,7 +25,7 @@ post-processor. Within a template, a section of post-processor definitions looks like this: -```json +``` json { "post-processors": [ // ... one or more post-processor definitions here @@ -51,7 +51,7 @@ A **simple definition** is just a string; the name of the post-processor. An example is shown below. Simple definitions are used when no additional configuration is needed for the post-processor. -```json +``` json { "post-processors": ["compress"] } @@ -63,7 +63,7 @@ post-processor, but may also contain additional configuration for the post-processor. A detailed definition is used when additional configuration is needed beyond simply the type for the post-processor. An example is shown below. -```json +``` json { "post-processors": [ { @@ -84,7 +84,7 @@ compressed then uploaded, but the compressed result is not kept. It is very important that any post processors that need to be run in order, be sequenced! 
-```json +``` json { "post-processors": [ [ @@ -102,7 +102,7 @@ simply shortcuts for a **sequence** definition of only one element. It is important to sequence post processors when creating and uploading vagrant boxes to Atlas via Packer. Using a sequence will ensure that the post processors are run in order and that the vagrant box is created prior to uploading the box to Atlas. -```json +``` json { "post-processors": [ [ @@ -138,7 +138,7 @@ In some cases, however, you may want to keep the intermediary artifacts. You can tell Packer to keep these artifacts by setting the `keep_input_artifact` configuration to `true`. An example is shown below: -```json +``` json { "post-processors": [ { @@ -154,7 +154,7 @@ post-processor. If you're specifying a sequence of post-processors, then all intermediaries are discarded by default except for the input artifacts to post-processors that explicitly state to keep the input artifact. --> **Note:** The intuitive reader may be wondering what happens if multiple +-> **Note:** The intuitive reader may be wondering what happens if multiple post-processors are specified (not in a sequence). Does Packer require the configuration to keep the input artifact on all the post-processors? The answer is no, of course not. Packer is smart enough to figure out that at least one @@ -172,7 +172,7 @@ effectively the same. `only` and `except` can only be specified on "detailed" configurations. If you have a sequence of post-processors to run, `only` and `except` will only affect that single post-processor in the sequence. 
-```json +``` json { "type": "vagrant", "only": ["virtualbox-iso"] diff --git a/website/source/docs/templates/provisioners.html.md b/website/source/docs/templates/provisioners.html.md index 2cbf3cbe4..4b4ef7f4d 100644 --- a/website/source/docs/templates/provisioners.html.md +++ b/website/source/docs/templates/provisioners.html.md @@ -1,11 +1,11 @@ --- +description: | + Within the template, the provisioners section contains an array of all the + provisioners that Packer should use to install and configure software within + running machines prior to turning them into machine images. layout: docs -sidebar_current: docs-templates-provisioners -page_title: Provisioners - Templates -description: |- - Within the template, the provisioners section contains an array of all the - provisioners that Packer should use to install and configure software within - running machines prior to turning them into machine images. +page_title: 'Provisioners - Templates' +sidebar_current: 'docs-templates-provisioners' --- # Template Provisioners @@ -25,7 +25,7 @@ be referenced from the documentation for that specific provisioner. Within a template, a section of provisioner definitions looks like this: -```json +``` json { "provisioners": [ // ... one or more provisioner definitions here @@ -50,7 +50,7 @@ specifies a path to a shell script to execute within the machines being created. An example provisioner definition is shown below, configuring the shell provisioner to run a local script within the machines: -```json +``` json { "type": "shell", "script": "script.sh" @@ -67,7 +67,7 @@ provisioner on anything other than the specified builds. An example of `only` being used is shown below, but the usage of `except` is effectively the same: -```json +``` json { "type": "shell", "script": "script.sh", @@ -97,7 +97,7 @@ identical. However, they may initially need to be run differently. 
This example is shown below: -```json +``` json { "type": "shell", "script": "script.sh", @@ -126,7 +126,7 @@ Every provisioner definition in a Packer template can take a special configuration `pause_before` that is the amount of time to pause before running that provisioner. By default, there is no pause. An example is shown below: -```json +``` json { "type": "shell", "script": "script.sh", diff --git a/website/source/docs/templates/push.html.md b/website/source/docs/templates/push.html.md index c35ff29c4..255f3d842 100644 --- a/website/source/docs/templates/push.html.md +++ b/website/source/docs/templates/push.html.md @@ -1,10 +1,10 @@ --- +description: | + Within the template, the push section configures how a template can be pushed + to a remote build service. layout: docs -sidebar_current: docs-templates-push -page_title: Push - Templates -description: |- - Within the template, the push section configures how a template can be pushed - to a remote build service. +page_title: 'Push - Templates' +sidebar_current: 'docs-templates-push' --- # Template Push @@ -22,7 +22,7 @@ services will come in the form of plugins in the future. Within a template, a push configuration section looks like this: -```json +``` json { "push": { // ... push configuration here @@ -38,30 +38,30 @@ each category, the available configuration keys are alphabetized. ### Required -- `name` (string) - Name of the build configuration in the build service. If +- `name` (string) - Name of the build configuration in the build service. If this doesn't exist, it will be created (by default). Note that the name cannot contain dots. `[a-zA-Z0-9-_/]+` are safe. ### Optional -- `address` (string) - The address of the build service to use. By default +- `address` (string) - The address of the build service to use. By default this is `https://atlas.hashicorp.com`. -- `base_dir` (string) - The base directory of the files to upload. 
This will +- `base_dir` (string) - The base directory of the files to upload. This will be the current working directory when the build service executes your template. This path is relative to the template. -- `include` (array of strings) - Glob patterns to include relative to the +- `include` (array of strings) - Glob patterns to include relative to the `base_dir`. If this is specified, only files that match the include pattern are included. -- `exclude` (array of strings) - Glob patterns to exclude relative to the +- `exclude` (array of strings) - Glob patterns to exclude relative to the `base_dir`. -- `token` (string) - An access token to use to authenticate to the +- `token` (string) - An access token to use to authenticate to the build service. -- `vcs` (boolean) - If true, Packer will detect your VCS (if there is one) and +- `vcs` (boolean) - If true, Packer will detect your VCS (if there is one) and only upload the files that are tracked by the VCS. This is useful for automatically excluding ignored files. This defaults to false. @@ -69,7 +69,7 @@ each category, the available configuration keys are alphabetized. A push configuration section with minimal options: -```json +``` json { "push": { "name": "hashicorp/precise64" @@ -80,7 +80,7 @@ A push configuration section with minimal options: A push configuration specifying Packer to inspect the VCS and list individual files to include: -```json +``` json { "push": { "name": "hashicorp/precise64", diff --git a/website/source/docs/templates/user-variables.html.md b/website/source/docs/templates/user-variables.html.md index 88e562edd..7a7efe92b 100644 --- a/website/source/docs/templates/user-variables.html.md +++ b/website/source/docs/templates/user-variables.html.md @@ -1,13 +1,13 @@ --- +description: | + User variables allow your templates to be further configured with variables + from the command-line, environment variables, or files. 
This lets you + parameterize your templates so that you can keep secret tokens, + environment-specific data, and other types of information out of your + templates. This maximizes the portability and shareability of the template. layout: docs -sidebar_current: docs-templates-user-variables -page_title: User Variables - Templates -description: |- - User variables allow your templates to be further configured with variables - from the command-line, environment variables, or files. This lets you - parameterize your templates so that you can keep secret tokens, - environment-specific data, and other types of information out of your - templates. This maximizes the portability and shareability of the template. +page_title: 'User Variables - Templates' +sidebar_current: 'docs-templates-user-variables' --- # Template User Variables @@ -34,7 +34,7 @@ The `variables` section is a key/value mapping of the user variable name to a default value. A default value can be the empty string. An example is shown below: -```json +``` json { "variables": { "aws_access_key": "", @@ -72,7 +72,7 @@ The `env` function is available *only* within the default value of a user variable, allowing you to default a user variable to an environment variable. An example is shown below: -```json +``` json { "variables": { "my_secret": "{{env `MY_SECRET`}}", @@ -83,7 +83,7 @@ An example is shown below: This will default "my\_secret" to be the value of the "MY\_SECRET" environment variable (or an empty string if it does not exist). --> **Why can't I use environment variables elsewhere?** User variables are +-> **Why can't I use environment variables elsewhere?** User variables are the single source of configurable input to a template. We felt that having environment variables used *anywhere* in a template would confuse the user about the possible inputs to a template. 
By allowing environment variables @@ -91,7 +91,7 @@ only within default values for user variables, user variables remain as the single source of input to a template that a user can easily discover using `packer inspect`. --> **Why can't I use `~` for home variable?** `~` is an special variable +-> **Why can't I use `~` for home variable?** `~` is a special variable that is evaluated by the shell during variable expansion. As Packer doesn't run inside a shell, it won't expand `~`. @@ -110,7 +110,7 @@ example above, we could build our template using the command below. The command is split across multiple lines for readability, but can of course be a single line. -```text +``` text $ packer build \ -var 'aws_access_key=foo' \ -var 'aws_secret_key=bar' \ @@ -127,7 +127,7 @@ Variables can also be set from an external JSON file. The `-var-file` flag reads a file containing a key/value mapping of variables to values and sets those variables. An example JSON file may look like this: -```json +``` json { "aws_access_key": "foo", "aws_secret_key": "bar" } @@ -138,7 +138,7 @@ It is a single JSON object where the keys are variables and the values are the variable values. Assuming this file is in `variables.json`, we can build our template using the following command: -```text +``` text $ packer build -var-file=variables.json template.json ``` @@ -151,7 +151,7 @@ expect. Variables set later in the command override variables set earlier. 
So, for example, in the following command with the above `variables.json` file: -```text +``` text $ packer build \ -var 'aws_access_key=bar' \ -var-file=variables.json \ @@ -161,10 +161,10 @@ $ packer build \ Results in the following variables: -| Variable | Value | -| -------- | --------- | -| aws_access_key | foo | -| aws_secret_key | baz | +| Variable | Value | +|------------------|-------| +| aws\_access\_key | foo | +| aws\_secret\_key | baz | # Recipes @@ -176,7 +176,7 @@ be able to do this by referencing the variable within a command that you execute. For example, here is how to make a `shell-local` provisioner only run if the `do_nexpose_scan` variable is non-empty. -```json +``` json { "type": "shell-local", "command": "if [ ! -z \"{{user `do_nexpose_scan`}}\" ]; then python -u trigger_nexpose_scan.py; fi" } @@ -187,7 +187,7 @@ provisioner only run if the `do_nexpose_scan` variable is non-empty. In order to use the `$HOME` variable, you can create a `home` variable in Packer: -```json +``` json { "variables": { "home": "{{env `HOME`}}" } @@ -197,7 +197,7 @@ In order to use `$HOME` variable, you can create a `home` variable in Packer: And this will be available for use in the rest of the template, for example: -```json +``` json { "builders": [ {