Merge pull request #9384 from hashicorp/website/getting-started/migration

Refactored to the Learn-hosted getting started track.
This commit is contained in:
Megan Marsh 2020-06-11 13:36:35 -07:00 committed by GitHub
commit 1eb868cef5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
11 changed files with 51 additions and 1171 deletions

View File

@ -24,8 +24,18 @@
/docs/command-line/* /docs/commands/:splat 200
/docs/extend/* /docs/extending/:splat 200
/intro/getting-started/install /intro/getting-started 301!
/intro/getting-started/install.html /intro/getting-started 301!
/intro/getting-started/install https://learn.hashicorp.com/packer/getting-started/install 301!
/intro/getting-started/install.html https://learn.hashicorp.com/packer/getting-started/install 301!
/intro/getting-started/build-image https://learn.hashicorp.com/packer/getting-started/build-image 301!
/intro/getting-started/build-image.html https://learn.hashicorp.com/packer/getting-started/build-image 301!
/intro/getting-started/provision https://learn.hashicorp.com/packer/getting-started/provision 301!
/intro/getting-started/provision.html https://learn.hashicorp.com/packer/getting-started/provision 301!
/intro/getting-started/parallel-builds https://learn.hashicorp.com/packer/getting-started/parallel-builds 301!
/intro/getting-started/parallel-builds.html https://learn.hashicorp.com/packer/getting-started/parallel-builds 301!
/intro/getting-started/vagrant https://learn.hashicorp.com/packer/getting-started/vagrant 301!
/intro/getting-started/vagrant.html https://learn.hashicorp.com/packer/getting-started/vagrant 301!
/intro/getting-started/next https://learn.hashicorp.com/packer/getting-started/next 301!
/intro/getting-started/next.html https://learn.hashicorp.com/packer/getting-started/next 301!
/docs/basics/terminology /docs/terminology 301!
/docs/basics/terminology.html /docs/terminology 301!

View File

@ -10,6 +10,32 @@ export default [
'use-cases',
{
category: 'getting-started',
content: ['build-image', 'provision', 'parallel-builds', 'vagrant', 'next']
}
name: 'Getting Started',
content: [
{
title: 'Install',
href: '/intro/getting-started/install',
},
{
title: 'Build An Image',
href: '/intro/getting-started/build-image',
},
{
title: 'Provision',
href: '/intro/getting-started/provision',
},
{
title: 'Parallel Builds',
href: '/intro/getting-started/parallel-builds',
},
{
title: 'Vagrant Boxes',
href: '/intro/getting-started/vagrant',
},
{
title: 'Next Steps',
href: '/intro/getting-started/next',
},
],
},
]

View File

@ -10,4 +10,6 @@ sidebar_title: Installing Packer
# Install Packer
For detailed instructions on how to install Packer, see [this
page](/intro/getting-started/install) in our getting-started guide.
Getting Started guide][install].
[install]: https://learn.hashicorp.com/packer/getting-started/install 'Install Packer'

View File

@ -11,7 +11,7 @@ export default function Homepage() {
<Button
title="Get Started"
theme={{ brand: 'packer' }}
url="/intro"
url="https://learn.hashicorp.com/packer"
/>
<Button
title={`Download ${VERSION}`}
@ -91,12 +91,12 @@ export default function Homepage() {
<div className="tag g-type-label">Infrastructure as Code</div>
<h2 className="g-type-display-2">Modern, Automated</h2>
<p className="g-type-body">
HashiCorp Packer is easy to use and automates the creation of any
type of machine image. It embraces modern configuration management
by encouraging you to use automated scripts to install and configure
the software within your Packer-made images. Packer brings machine
images into the modern age, unlocking untapped potential and opening
new opportunities.
HashiCorp Packer automates the creation of any type of machine
image. It embraces modern configuration management by encouraging
you to use automated scripts to install and configure the software
within your Packer-made images. Packer brings machine images into
the modern age, unlocking untapped potential and opening new
opportunities.
</p>
</div>
</section>

View File

@ -1,636 +0,0 @@
---
layout: intro
page_title: Build an Image - Getting Started
sidebar_title: Build an Image
description: |-
With Packer installed, let's just dive right into it and build our first
image. Our first image will be an Amazon EC2 AMI with Redis pre-installed.
This is just an example. Packer can create images for many platforms with
anything pre-installed.
---
# Build an Image
With Packer installed, let's just dive right into it and build our first image.
Our first image will be an [Amazon EC2 AMI](https://aws.amazon.com/ec2/).
This is just an example. Packer can create images for [many platforms][platforms].
If you don't have an AWS account, [create one now](https://aws.amazon.com/free/).
For the example, we'll use a "t2.micro" instance to build our image, which
qualifies under the AWS [free-tier](https://aws.amazon.com/free/), meaning it
will be free. If you already have an AWS account, you may be charged some amount
of money, but it shouldn't be more than a few cents.
-> **Note:** If you're not using an account that qualifies under the AWS
free-tier, you may be charged to run these examples. The charge should only be a
few cents, but we're not responsible if it ends up being more.
Packer can build images for [many platforms][platforms] other than
AWS, but AWS requires no additional software installed on your computer and
their [free-tier](https://aws.amazon.com/free/) makes it free to use for most
people. This is why we chose to use AWS for the example. If you're uncomfortable
setting up an AWS account, feel free to follow along as the basic principles
apply to the other platforms as well.
## The Template
The configuration file used to define what image we want built and how is called
a _template_ in Packer terminology. The format of a template is simple
[JSON](http://www.json.org/). JSON strikes the best balance between being
human-editable and machine-editable, allowing both hand-made and
machine-generated templates to be created easily.
We'll start by creating the entire template, then we'll go over each section
briefly. Create a file `example.json` and fill it with the following contents:
```json
{
"variables": {
"aws_access_key": "",
"aws_secret_key": ""
},
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-1",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
"root-device-type": "ebs"
},
"owners": ["099720109477"],
"most_recent": true
},
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-example {{timestamp}}"
}
]
}
```
When building, you'll pass in `aws_access_key` and `aws_secret_key` as
[user variables](/docs/templates/user-variables), keeping your secret keys
out of the template. You can create security credentials on [this
page](https://console.aws.amazon.com/iam/home?#security_credential). An example
IAM policy document can be found in the [Amazon EC2 builder
docs](/docs/builders/amazon).
This is a basic template that is ready-to-go. It should be immediately
recognizable as a normal, basic JSON object. Within the object, the `builders`
section contains an array of JSON objects configuring a specific _builder_. A
builder is a component of Packer that is responsible for creating a machine and
turning that machine into an image.
In this case, we're only configuring a single builder of type `amazon-ebs`. This
is the Amazon EC2 AMI builder that ships with Packer. This builder builds an
EBS-backed AMI by launching a source AMI, provisioning on top of that, and
re-packaging it into a new AMI.
The additional keys within the object are configuration for this builder,
specifying things such as access keys, the source AMI to build from and more.
The exact set of configuration variables available for a builder is specific to
each builder and can be found within the [documentation](/docs).
Before we take this template and build an image from it, let's validate the
template by running `packer validate example.json`. This command checks the
syntax as well as the configuration values to verify they look valid. The output
should look similar to below, because the template should be valid. If there are
any errors, this command will tell you.
```shell-session
$ packer validate example.json
Template validated successfully.
```
Next, let's build the image from this template.
## Your First Image
With a properly validated template, it is time to build your first image. This
is done by calling `packer build` with the template file. The output should look
similar to below. Note that this process typically takes a few minutes.
-> **Note:** For the tutorial it is convenient to pass the credentials on the
command line. However, this is potentially insecure. See our documentation for
other ways to [specify Amazon credentials](/docs/builders/amazon#specifying-amazon-credentials).
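One alternative, sketched here with an illustrative filename, is to keep the values in a separate variables file that you don't commit to version control:
```json
{
  "aws_access_key": "YOUR ACCESS KEY",
  "aws_secret_key": "YOUR SECRET KEY"
}
```
Saved as `secrets.json`, the file can then be supplied at build time with `packer build -var-file=secrets.json example.json`.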
-> **Note:** When using packer on Windows, replace the single-quotes in the
command below with double-quotes.
```shell-session
$ packer build \
-var 'aws_access_key=YOUR ACCESS KEY' \
-var 'aws_secret_key=YOUR SECRET KEY' \
example.json
==> amazon-ebs: amazon-ebs output will be in this color.
==> amazon-ebs: Creating temporary keypair for this instance...
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Waiting for instance to become ready...
==> amazon-ebs: Connecting to the instance via SSH...
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: packer-example 1371856345
==> amazon-ebs: AMI: ami-19601070
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
==> amazon-ebs: Build finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-19601070
```
At the end of running `packer build`, Packer outputs the _artifacts_ that were
created as part of the build. Artifacts are the results of a build, and
typically represent an ID (such as in the case of an AMI) or a set of files
(such as for a VMware virtual machine). In this example, we only have a single
artifact: the AMI in us-east-1 that was created.
This AMI is ready to use. If you wanted you could go and launch this AMI right
now and it would work great.
-> **Note:** Your AMI ID will surely be different than the one above. If you
try to launch the one in the example output above, you will get an error. If you
want to try to launch your AMI, get the ID from the Packer output.
-> **Note:** If you see a `VPCResourceNotSpecified` error, Packer might not be
able to determine the default VPC, which the `t2` instance types require. This
can happen if you created your AWS account before `2013-12-04`. You can either
change the `instance_type` to `m3.medium`, or specify a VPC. Please see
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html for more
information. If you specify a `vpc_id`, you will also need to set `subnet_id`.
Unless you modify your subnet's [IPv4 public addressing attribute](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html#subnet-public-ip),
you will also need to set `associate_public_ip_address` to `true`, or set up a
[VPN](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html).
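As a rough sketch of that workaround (the IDs below are placeholders, not real resources), the extra keys would sit alongside the other settings in the `amazon-ebs` builder:
```json
{
  "type": "amazon-ebs",
  "instance_type": "t2.micro",
  "vpc_id": "vpc-0123456789abcdef0",
  "subnet_id": "subnet-0123456789abcdef0",
  "associate_public_ip_address": true
}
```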
## Managing the Image
Packer only builds images. It does not attempt to manage them in any way. After
they're built, it is up to you to launch or destroy them as you see fit.
After running the above example, your AWS account now has an AMI associated
with it. AMIs are stored in S3 by Amazon, so unless you want to be charged
about `$0.01` per month, you'll probably want to remove it. Remove the AMI by
first deregistering it on the [AWS AMI management
page](https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Images). Next,
delete the associated snapshot on the [AWS snapshot management
page](https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Snapshots).
Congratulations! You've just built your first image with Packer. Although the
image was pretty useless in this case (nothing was changed about it), this page
should've given you a general idea of how Packer works, what templates are and
how to validate and build templates into machine images.
## Some More Examples
### Another GNU/Linux Example, with Provisioners
Create a file named `welcome.txt` and add the following:
```text
WELCOME TO PACKER!
```
Create a file named `example.sh` and add the following:
```bash
#!/bin/bash
echo "hello"
```
Set your AWS access key ID and secret access key as environment variables, so
we don't need to pass them in through the command line:
```shell
export AWS_ACCESS_KEY_ID=MYACCESSKEYID
export AWS_SECRET_ACCESS_KEY=MYSECRETACCESSKEY
```
Now save the following text in a file named `firstrun.json`:
```json
{
"variables": {
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"region": "us-east-1"
},
"builders": [
{
"access_key": "{{user `aws_access_key`}}",
"ami_name": "packer-linux-aws-demo-{{timestamp}}",
"instance_type": "t2.micro",
"region": "{{user `region`}}",
"secret_key": "{{user `aws_secret_key`}}",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
"root-device-type": "ebs"
},
"owners": ["099720109477"],
"most_recent": true
},
"ssh_username": "ubuntu",
"type": "amazon-ebs"
}
],
"provisioners": [
{
"type": "file",
"source": "./welcome.txt",
"destination": "/home/ubuntu/"
},
{
"type": "shell",
"inline": ["ls -al /home/ubuntu", "cat /home/ubuntu/welcome.txt"]
},
{
"type": "shell",
"script": "./example.sh"
}
]
}
```
To build, run `packer build firstrun.json`.
Note that if you wanted to use a `source_ami` instead of a `source_ami_filter`
it might look something like this: `"source_ami": "ami-fce3c696"`.
Your output will look like this:
```text
amazon-ebs output will be in this color.
==> amazon-ebs: Prevalidating AMI Name: packer-linux-aws-demo-1507231105
amazon-ebs: Found Image ID: ami-fce3c696
==> amazon-ebs: Creating temporary keypair: packer_59d68581-e3e6-eb35-4ae3-c98d55cfa04f
==> amazon-ebs: Creating temporary security group for this instance: packer_59d68584-cf8a-d0af-ad82-e058593945ea
==> amazon-ebs: Authorizing access to port 22 on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Instance ID: i-013e8fb2ced4d714c
==> amazon-ebs: Waiting for instance (i-013e8fb2ced4d714c) to become ready...
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Uploading ./scripts/welcome.txt => /home/ubuntu/
==> amazon-ebs: Provisioning with shell script: /var/folders/8t/0yb5q0_x6mb2jldqq_vjn3lr0000gn/T/packer-shell661094204
amazon-ebs: total 32
amazon-ebs: drwxr-xr-x 4 ubuntu ubuntu 4096 Oct 5 19:19 .
amazon-ebs: drwxr-xr-x 3 root root 4096 Oct 5 19:19 ..
amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu 220 Apr 9 2014 .bash_logout
amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu 3637 Apr 9 2014 .bashrc
amazon-ebs: drwx------ 2 ubuntu ubuntu 4096 Oct 5 19:19 .cache
amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu 675 Apr 9 2014 .profile
amazon-ebs: drwx------ 2 ubuntu ubuntu 4096 Oct 5 19:19 .ssh
amazon-ebs: -rw-r--r-- 1 ubuntu ubuntu 18 Oct 5 19:19 welcome.txt
amazon-ebs: WELCOME TO PACKER!
==> amazon-ebs: Provisioning with shell script: ./example.sh
amazon-ebs: hello
==> amazon-ebs: Stopping the source instance...
amazon-ebs: Stopping instance, attempt 1
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: packer-linux-aws-demo-1507231105
amazon-ebs: AMI: ami-f76ea98d
==> amazon-ebs: Waiting for AMI to become ready...
```
### A Windows Example
As with the GNU/Linux example above, if you decide to follow along and
build an AMI from the example template, you should not be charged for actually
building the AMI, provided you qualify for free-tier usage.
However, please note that you will be charged for storage of the snapshot
associated with any AMI that you create.
If you wish to avoid further charges, follow the steps in the [Managing the
Image](/intro/getting-started/build-image#managing-the-image) section
above to deregister the created AMI and delete the associated snapshot once
you're done.
Again, in this example, we are making use of an existing AMI available from
the Amazon marketplace as the _source_ or starting point for building our
own AMI. In brief, Packer will spin up the source AMI, connect to it and then
run whatever commands or scripts we've configured in our build template to
customize the image. Finally, when all is done, Packer will wrap the whole
customized package up into a brand new AMI that will be available from the
[AWS AMI management page](https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Images). Any
instances we subsequently create from this AMI will have all of our
customizations baked in. This is the core benefit we are looking to
achieve from using the [Amazon EBS builder](/docs/builders/amazon-ebs)
in this example.
Now, all this sounds simple enough, right? Well, it actually turns out we
need to put in just a _bit_ more effort to get things working as we'd like...
Here's the issue: Out of the box, the instance created from our source AMI
is not configured to allow Packer to connect to it. So how do we fix it so
that Packer can connect in and customize our instance?
Well, it turns out that Amazon provides a mechanism that allows us to run a
set of _pre-supplied_ commands within the instance shortly after the instance
starts. Even better, Packer is aware of this mechanism. This gives us the
ability to supply Packer with the commands required to configure the instance
for a remote connection _in advance_. Once the commands are run, Packer
will be able to connect directly in to the instance and make the
customizations we need.
Here's a basic example of a file that will configure the instance to allow
Packer to connect in over WinRM. As you will see, we will tell Packer about
our intentions by referencing this file and the commands within it from
within the `"builders"` section of our
[build template](/docs/templates) that we will create later.
Note the `<powershell>` and `</powershell>` tags at the top and bottom of
the file. These tags tell Amazon we'd like to run the enclosed code with
PowerShell. You can also use `<script></script>` tags to enclose any commands
that you would normally run in a Command Prompt window. See
[Running Commands on Your Windows Instance at Launch](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html)
for more info about what's going on behind the scenes here.
```powershell
<powershell>
# Set administrator password
net user Administrator SuperS3cr3t!!!!
wmic useraccount where "name='Administrator'" set PasswordExpires=FALSE
# First, make sure WinRM can't be connected to
netsh advfirewall firewall set rule name="Windows Remote Management (HTTP-In)" new enable=yes action=block
# Delete any existing WinRM listeners
winrm delete winrm/config/listener?Address=*+Transport=HTTP 2>$Null
winrm delete winrm/config/listener?Address=*+Transport=HTTPS 2>$Null
# Disable group policies which block basic authentication and unencrypted login
Set-ItemProperty -Path HKLM:\Software\Policies\Microsoft\Windows\WinRM\Client -Name AllowBasic -Value 1
Set-ItemProperty -Path HKLM:\Software\Policies\Microsoft\Windows\WinRM\Client -Name AllowUnencryptedTraffic -Value 1
Set-ItemProperty -Path HKLM:\Software\Policies\Microsoft\Windows\WinRM\Service -Name AllowBasic -Value 1
Set-ItemProperty -Path HKLM:\Software\Policies\Microsoft\Windows\WinRM\Service -Name AllowUnencryptedTraffic -Value 1
# Create a new WinRM listener and configure
winrm create winrm/config/listener?Address=*+Transport=HTTP
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="0"}'
winrm set winrm/config '@{MaxTimeoutms="7200000"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service '@{MaxConcurrentOperationsPerUser="12000"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/client/auth '@{Basic="true"}'
# Configure UAC to allow privilege elevation in remote shells
$Key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
$Setting = 'LocalAccountTokenFilterPolicy'
Set-ItemProperty -Path $Key -Name $Setting -Value 1 -Force
# Configure and restart the WinRM Service; Enable the required firewall exception
Stop-Service -Name WinRM
Set-Service -Name WinRM -StartupType Automatic
netsh advfirewall firewall set rule name="Windows Remote Management (HTTP-In)" new action=allow localip=any remoteip=any
Start-Service -Name WinRM
</powershell>
```
-> **Warning:** Please note that if you're setting up WinRM for provisioning, you'll probably want to turn it off or restrict its permissions as part of a shutdown script at the end of Packer's provisioning process. For more details on the why/how, check out this useful blog post and the associated code:
https://cloudywindows.io/post/winrm-for-provisioning-close-the-door-on-the-way-out-eh/
Save the above code in a file named `bootstrap_win.txt`.
-> **A quick aside/warning:** Windows administrators in the know might be wondering why we haven't simply
used a `winrm quickconfig -q` command in the script above, as this would
_automatically_ set up all of the required elements necessary for connecting
over WinRM. Why all the extra effort to configure things manually?<br /><br />
Well, long and short, use of the `winrm quickconfig -q` command can sometimes
cause the Packer build to fail shortly after the WinRM connection is
established. How?<br /><br />1. Among other things, as well as setting up the listener for WinRM, the
quickconfig command also configures the firewall to allow management messages
to be sent over HTTP.<br />2. This undoes the previous command in the script that configured the
firewall to prevent this access.<br />3. The upshot is that the system is configured and ready to accept WinRM
connections earlier than intended.<br />4. If Packer establishes its WinRM connection immediately after execution of
the 'winrm quickconfig -q' command, the later commands within the script that
restart the WinRM service will unceremoniously pull the rug out from under
the connection.<br />5. While Packer does _a lot_ to ensure the stability of its connection in to
your instance, this sort of abuse can prove to be too much and _may_ cause
your Packer build to stall irrecoverably or fail!
Now that we've got the business of getting Packer connected to our instance
taken care of, let's get on with the _real_ reason we're doing all this,
which is actually configuring and customizing the instance. Again, we do this
with [Provisioners](/docs/provisioners).
The example config below shows the two different ways of using the [PowerShell
provisioner](/docs/provisioners/powershell): `inline` and `script`.
The first example, `inline`, allows you to provide short snippets of code, and
will create the script file for you. The second example allows you to run more
complex code by providing the path to a script to run on the guest VM.
Here's an example of a `sample_script.ps1` that will work with the environment
variables we will set in our build template; copy the contents into your own
`sample_script.ps1` and provide the path to it in your build template:
```powershell
Write-Host "PACKER_BUILD_NAME is an env var Packer automatically sets for you."
Write-Host "...or you can set it in your builder variables."
Write-Host "The default for this builder is:" $Env:PACKER_BUILD_NAME
Write-Host "The PowerShell provisioner will automatically escape characters"
Write-Host "considered special to PowerShell when it encounters them in"
Write-Host "your environment variables or in the PowerShell elevated"
Write-Host "username/password fields."
Write-Host "For example, VAR1 from our config is:" $Env:VAR1
Write-Host "Likewise, VAR2 is:" $Env:VAR2
Write-Host "VAR3 is:" $Env:VAR3
Write-Host "Finally, VAR4 is:" $Env:VAR4
Write-Host "None of the special characters needed escaping in the template"
```
Finally, we need to create the actual [build template](/docs/templates).
Remember, this template is the core configuration file that Packer uses to
understand what you want to build, and how you want to build it.
As mentioned earlier, the specific builder we are using in this example
is the [Amazon EBS builder](/docs/builders/amazon-ebs).
The template below demonstrates use of the [`source_ami_filter`](/docs/builders/amazon-ebs#source_ami_filter) configuration option
available within the builder for automatically selecting the _latest_
suitable source Windows AMI provided by Amazon.
We also use the `user_data_file` configuration option provided by the builder
to reference the bootstrap file we created earlier. As you will recall, our
bootstrap file contained all the commands we needed to supply in advance of
actually spinning up the instance, so that later on, our instance is
configured to allow Packer to connect in to it.
The `"provisioners"` section of the template demonstrates use of the
[powershell](/docs/provisioners/powershell) and
[windows-restart](/docs/provisioners/windows-restart) provisioners to
customize and control the build process:
```json
{
"variables": {
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"region": "us-east-1"
},
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{ user `aws_access_key` }}",
"secret_key": "{{ user `aws_secret_key` }}",
"region": "{{ user `region` }}",
"instance_type": "t2.micro",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "*Windows_Server-2012-R2*English-64Bit-Base*",
"root-device-type": "ebs"
},
"most_recent": true,
"owners": "amazon"
},
"ami_name": "packer-demo-{{timestamp}}",
"user_data_file": "./bootstrap_win.txt",
"communicator": "winrm",
"winrm_username": "Administrator",
"winrm_password": "SuperS3cr3t!!!!"
}
],
"provisioners": [
{
"type": "powershell",
"environment_vars": ["DEVOPS_LIFE_IMPROVER=PACKER"],
"inline": [
"Write-Host \"HELLO NEW USER; WELCOME TO $Env:DEVOPS_LIFE_IMPROVER\"",
"Write-Host \"You need to use backtick escapes when using\"",
"Write-Host \"characters such as DOLLAR`$ directly in a command\"",
"Write-Host \"or in your own scripts.\""
]
},
{
"type": "windows-restart"
},
{
"script": "./sample_script.ps1",
"type": "powershell",
"environment_vars": [
"VAR1=A$Dollar",
"VAR2=A`Backtick",
"VAR3=A'SingleQuote",
"VAR4=A\"DoubleQuote"
]
}
]
}
```
Save the build template as `firstrun.json`.
Next we need to set things up so that Packer is able to access and use our
AWS account. Set your AWS access key ID and secret access key as environment
variables, so we don't need to pass them in through the command line:
```shell
export AWS_ACCESS_KEY_ID=MYACCESSKEYID
export AWS_SECRET_ACCESS_KEY=MYSECRETACCESSKEY
```
Finally, we can create our new AMI by running `packer build firstrun.json`.
You should see output like this:
```text
amazon-ebs output will be in this color.
==> amazon-ebs: Prevalidating AMI Name: packer-demo-1518111383
amazon-ebs: Found Image ID: ami-013e197b
==> amazon-ebs: Creating temporary keypair: packer_5a7c8a97-f27f-6708-cc3c-6ab9b4688b13
==> amazon-ebs: Creating temporary security group for this instance: packer_5a7c8ab5-444c-13f2-0aa1-18d124cdb975
==> amazon-ebs: Authorizing access to port 5985 from 0.0.0.0/0 in the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Instance ID: i-0c8c808a3b945782a
==> amazon-ebs: Waiting for instance (i-0c8c808a3b945782a) to become ready...
==> amazon-ebs: Skipping waiting for password since WinRM password set...
==> amazon-ebs: Waiting for WinRM to become available...
amazon-ebs: WinRM connected.
==> amazon-ebs: Connected to WinRM!
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner943573503
amazon-ebs: HELLO NEW USER; WELCOME TO PACKER
amazon-ebs: You need to use backtick escapes when using
amazon-ebs: characters such as DOLLAR$ directly in a command
amazon-ebs: or in your own scripts.
==> amazon-ebs: Restarting Machine
==> amazon-ebs: Waiting for machine to restart...
amazon-ebs: WIN-NI8N45RPJ23 restarted.
==> amazon-ebs: Machine successfully restarted, moving on
==> amazon-ebs: Provisioning with Powershell...
==> amazon-ebs: Provisioning with powershell script: ./sample_script.ps1
amazon-ebs: PACKER_BUILD_NAME is an env var Packer automatically sets for you.
amazon-ebs: ...or you can set it in your builder variables.
amazon-ebs: The default for this builder is: amazon-ebs
amazon-ebs: The PowerShell provisioner will automatically escape characters
amazon-ebs: considered special to PowerShell when it encounters them in
amazon-ebs: your environment variables or in the PowerShell elevated
amazon-ebs: username/password fields.
amazon-ebs: For example, VAR1 from our config is: A$Dollar
amazon-ebs: Likewise, VAR2 is: A`Backtick
amazon-ebs: VAR3 is: A'SingleQuote
amazon-ebs: Finally, VAR4 is: A"DoubleQuote
amazon-ebs: None of the special characters needed escaping in the template
==> amazon-ebs: Stopping the source instance...
amazon-ebs: Stopping instance, attempt 1
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: packer-demo-1518111383
amazon-ebs: AMI: ami-f0060c8a
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-f0060c8a
```
And if you navigate to your EC2 dashboard you should see your shiny new AMI
listed in the main window of the Images -> AMIs section.
Why stop there, though?
As you'll see, with one simple change to the template above, it's
just as easy to create your own Windows 2008 or Windows 2016 AMIs. Just
set the value of the `name` field within `source_ami_filter` as required:
For Windows 2008 SP2:
```text
"name": "*Windows_Server-2008-SP2*English-64Bit-Base*",
```
For Windows 2016:
```text
"name": "*Windows_Server-2016-English-Full-Base*",
```
The bootstrapping and sample provisioning should work the same across all
Windows server versions.
[Continue to provisioning an image &raquo;](/intro/getting-started/provision)
[platforms]: /docs/builders

View File

@ -1,128 +0,0 @@
---
layout: intro
page_title: Install Packer - Getting Started
sidebar_title: Getting Started
description: >-
Packer must first be installed on the machine you want to run it on. To make
installation easier, Packer is distributed as a binary package for all
supported platforms and architectures. This page also briefly covers compiling
Packer from source, which is only recommended for advanced users.
---
# Install Options
Packer may be installed in the following ways:
1. Using a [precompiled binary](#precompiled-binaries). We release binaries
for all supported platforms and architectures. This method is recommended for
most users.
2. Installing [from source](#compiling-from-source). This method is only
recommended for advanced users.
3. Using an unofficial [alternative installation method](#alternative-installation-methods).
## Precompiled Binaries
To install the precompiled binary, [download](/downloads) the appropriate
package for your system. Packer is currently packaged as a zip file. We do not
have any near term plans to provide system packages.
Next, unzip the downloaded package into a directory where Packer will be
installed. On Unix systems, `~/packer` or `/usr/local/packer` is generally good,
depending on whether you want to restrict the install to just your user or
install it system-wide. If you intend to access Packer from the command-line, make
sure to place it somewhere on your `PATH`. On Windows systems, you can install
the binary wherever you'd like. The single `packer` (or `packer.exe` for
Windows) binary contains all that is necessary to run Packer.
After unzipping the package, the directory should contain a single binary
program called `packer`. The final step to
installation is to make sure the directory you installed Packer to is on the
PATH. See [this
page](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux)
for instructions on setting the PATH on Linux and Mac. [This
page](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows)
contains instructions for setting the PATH on Windows.
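As a minimal sketch for a Linux or Mac system (the archive name and install directory are examples, not prescriptions), the whole process might look like:
```shell-session
$ unzip packer_*_linux_amd64.zip -d ~/packer
$ export PATH="$PATH:$HOME/packer"   # add this line to your shell profile to make it permanent
```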
## Compiling from Source
To compile from source, you will need [Go](https://golang.org) installed and
configured properly as well as a copy of [`git`](https://www.git-scm.com/)
in your `PATH`.
1. Clone the Packer repository from GitHub into your `GOPATH`:
```shell-session
$ mkdir -p $(go env GOPATH)/src/github.com/hashicorp && cd $_
$ git clone https://github.com/hashicorp/packer.git
$ cd packer
```
2. Build Packer for your current system and put the
binary in `./bin/` (relative to the git checkout). The `make dev` target is
just a shortcut that builds `packer` for only your local build environment (no
cross-compiled targets).
```shell-session
$ make dev
```
## Verifying the Installation
After installing Packer, verify the installation worked by opening a new command
prompt or console, and checking that `packer` is available:
```shell-session
$ packer
usage: packer [--version] [--help] <command> [<args>]
Available commands are:
build build image(s) from template
fix fixes templates from old versions of packer
inspect see components of a template
validate check that a template is valid
version Prints the Packer version
```
If you get an error that `packer` could not be found, then your PATH environment
variable was not set up properly. Please go back and ensure that your PATH
variable contains the directory where Packer is installed.
Otherwise, Packer is installed and you're ready to go!
## Alternative Installation Methods
While the binary packages are the only official method of installation, there are
alternatives available.
### Homebrew
If you're using OS X and [Homebrew](http://brew.sh), you can install Packer by
running:
```shell-session
$ brew install packer
```
### Chocolatey
If you're using Windows and [Chocolatey](http://chocolatey.org), you can
install Packer by running:
```shell-session
choco install packer
```
## Troubleshooting
On some _RedHat_-based Linux distributions there is another tool named `packer`
installed by default. You can check for this using `which -a packer`. If you get
an error like the one below, there is a name conflict:
```shell-session
$ packer
/usr/share/cracklib/pw_dict.pwd: Permission denied
/usr/share/cracklib/pw_dict: Permission denied
```
To fix this, you can create a symlink to packer that uses a different name like
`packer.io`, or invoke the `packer` binary you want using its absolute path,
e.g. `/usr/local/packer`.
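For example, assuming the HashiCorp binary lives at `/usr/local/packer` (adjust the path to wherever you installed it), a sketch of the symlink approach is:
```shell-session
$ sudo ln -s /usr/local/packer /usr/local/bin/packer.io
$ packer.io version
```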
[Continue to building an image &raquo;](/intro/getting-started/build-image)

View File

@ -1,26 +0,0 @@
---
layout: intro
page_title: Next Steps - Getting Started
sidebar_title: Next Steps
description: |-
That concludes the getting started guide for Packer. You should now be
comfortable with basic Packer usage, should understand templates, defining
builds, provisioners, etc. At this point you're ready to begin playing with
and using Packer in real scenarios.
---
# Next Steps
That concludes the getting started guide for Packer. You should now be
comfortable with basic Packer usage and should understand templates, builds,
provisioners, and so on. At this point you're ready to begin playing with and
using Packer in real scenarios.
From this point forward, the most important reference for you will be the
[documentation](/docs). The documentation is less of a guide and more of a
reference of all the overall features and options of Packer.
As you use Packer more, please voice your comments and concerns on the [mailing
list or IRC](/community). Additionally, Packer is [open
source](https://github.com/hashicorp/packer) so please contribute if you'd like
to. Contributions are very welcome.

View File

@ -1,181 +0,0 @@
---
layout: intro
page_title: Parallel Builds - Getting Started
sidebar_title: Parallel Builds
description: |-
So far we've shown how Packer can automatically build an image and provision
it. This on its own is already quite powerful. But Packer can do better than
that. Packer can create multiple images for multiple platforms in parallel,
all configured from a single template.
---
# Parallel Builds
So far we've shown how Packer can automatically build an image and provision it.
This on its own is already quite powerful. But Packer can do better than that.
Packer can create multiple images for multiple platforms _in parallel_, all
configured from a single template.
This is a very useful and important feature of Packer. As an example, Packer is
able to make an AMI and a VMware virtual machine in parallel provisioned with
the _same scripts_, resulting in near-identical images. The AMI can be used for
production, the VMware machine can be used for development. Or, another example,
if you're using Packer to build [software
appliances](https://en.wikipedia.org/wiki/Software_appliance), then you can build
the appliance for every supported platform all in parallel, all configured from
a single template.
Once you start taking advantage of this feature, the possibilities begin to
unfold in front of you.
Continuing on the example in this getting started guide, we'll build a
[DigitalOcean](http://www.digitalocean.com) image as well as an AMI. Both will
be near-identical: bare bones Ubuntu OS with Redis pre-installed. However, since
we're building for both platforms, you have the option of whether you want to
use the AMI, or the DigitalOcean snapshot. Or use both.
## Setting Up DigitalOcean
[DigitalOcean](https://www.digitalocean.com/) is a popular VPS provider with a
quality offering of high-performance, low-cost servers. We'll be building a
DigitalOcean snapshot for this example.
In order to do this, you'll need an account with DigitalOcean. [Sign up for an
account now](https://www.digitalocean.com/); it is free to sign up. Because the
"droplets" (servers) are charged hourly, you _will_ be charged `$0.01` for every
image you create with Packer. If you're not okay with this, just follow along
without running the build.
!> **Warning!** You _will_ be charged `$0.01` by DigitalOcean per image
created with Packer because of the time the "droplet" is running.
Once you sign up for an account, grab your API token from the [DigitalOcean API
access page](https://cloud.digitalocean.com/settings/applications). Save this
token somewhere; you'll need it in a second.
## Modifying the Template
We now have to modify the template to add DigitalOcean to it. Modify the
template we've been using and add the following JSON object to the `builders`
array.
```json
{
"type": "digitalocean",
"api_token": "{{user `do_api_token`}}",
"image": "ubuntu-14-04-x64",
"region": "nyc3",
"size": "512mb",
"ssh_username": "root"
}
```
You'll also need to modify the `variables` section of the template to include
the API token for DigitalOcean.
```json
"variables": {
"do_api_token": "",
// ...
}
```
The entire template should now look like this:
```json
{
"variables": {
"aws_access_key": "",
"aws_secret_key": "",
"do_api_token": ""
},
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-1",
"source_ami": "ami-fce3c696",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-example {{timestamp}}"
},
{
"type": "digitalocean",
"api_token": "{{user `do_api_token`}}",
"image": "ubuntu-14-04-x64",
"region": "nyc3",
"size": "512mb",
"ssh_username": "root"
}
],
"provisioners": [
{
"type": "shell",
"inline": [
"sleep 30",
"sudo apt-get update",
"sudo apt-get install -y redis-server"
]
}
]
}
```
Additional builders are simply added to the `builders` array in the template.
This tells Packer to build multiple images. The builder `type` values don't even
need to be different! In fact, if you want to build multiple AMIs, you can do
that as long as you specify a unique `name` for each build.
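For illustration (credentials, `source_ami_filter`, and the other required settings are omitted for brevity), two builds of the same type can be distinguished by `name` like this:
```json
"builders": [
  {
    "name": "amazon-east",
    "type": "amazon-ebs",
    "region": "us-east-1",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example-east {{timestamp}}"
  },
  {
    "name": "amazon-west",
    "type": "amazon-ebs",
    "region": "us-west-2",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example-west {{timestamp}}"
  }
]
```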
Validate the template with `packer validate`. This is always a good practice.
-> **Note:** If you're looking for more **DigitalOcean configuration
options**, you can find them on the [DigitalOcean Builder
page](/docs/builders/digitalocean) in the documentation. The documentation
is more of a reference manual that contains a listing of all the available
configuration options.
## Build
Now run `packer build` with your user variables. The output is too verbose to
include all of it, but a portion of it is reproduced below. Note that the
ordering and wording of the lines may be slightly different, but the effect is
the same.
```shell-session
$ packer build \
-var 'aws_access_key=YOUR ACCESS KEY' \
-var 'aws_secret_key=YOUR SECRET KEY' \
-var 'do_api_token=YOUR API TOKEN' \
example.json
==> amazon-ebs: amazon-ebs output will be in this color.
==> digitalocean: digitalocean output will be in this color.
==> digitalocean: Creating temporary ssh key for droplet...
==> amazon-ebs: Creating temporary keypair for this instance...
==> amazon-ebs: Creating temporary security group for this instance...
==> digitalocean: Creating droplet...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> digitalocean: Waiting for droplet to become active...
==> amazon-ebs: Waiting for instance to become ready...
==> digitalocean: Connecting to the droplet via SSH...
==> amazon-ebs: Connecting to the instance via SSH...
...
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-376d1d5e
--> digitalocean: A snapshot was created: packer-1371870364
```
As you can see, Packer builds both the Amazon and DigitalOcean images in
parallel. It outputs information about each in a different color (although you
can't see that in the block above), making it easier to identify the actions
each build performs as the command runs.
At the end of the build, Packer outputs both of the artifacts created (an AMI
and a DigitalOcean snapshot). Both images created are bare bones Ubuntu
installations with Redis pre-installed.
[Continue to Vagrant boxes &raquo;](/intro/getting-started/vagrant)

View File

@ -1,107 +0,0 @@
---
layout: intro
page_title: Provision - Getting Started
sidebar_title: Provision
description: |-
In the previous page of this guide, you created your first image with Packer.
The image you just built, however, was basically just a repackaging of a
previously existing base AMI. The real utility of Packer comes from being able
to install and configure software into the images as well. This stage is also
known as the *provision* step. Packer fully supports automated provisioning in
order to install software onto the machines prior to turning them into images.
---
# Provision
In the previous page of this guide, you created your first image with Packer.
The image you just built, however, was basically just a repackaging of a
previously existing base AMI. The real utility of Packer comes from being able
to install and configure software into the images as well. This stage is also
known as the _provision_ step. Packer fully supports automated provisioning in
order to install software onto the machines prior to turning them into images.
In this section, we're going to complete our image by installing Redis on it.
This way, the image we end up building actually contains Redis pre-installed.
Although Redis is a small, simple example, this should give you an idea of what
it may be like to install many more packages into the image.
Historically, pre-baked images have been frowned upon because changing them has
been so tedious and slow. Because Packer is completely automated, including
provisioning, images can be changed quickly and integrated with modern
configuration management tools such as Chef or Puppet.
## Configuring Provisioners
Provisioners are configured as part of the template. We'll use the built-in
shell provisioner that comes with Packer to install Redis. Modify the
`example.json` template we made previously and add the following. We'll explain
the various parts of the new configuration following the code block below.
```json
{
"variables": ["..."],
"builders": ["..."],
"provisioners": [
{
"type": "shell",
"inline": [
"sleep 30",
"sudo apt-get update",
"sudo apt-get install -y redis-server"
]
}
]
}
```
-> **Note:** The `sleep 30` in the example above is very important. Because
Packer is able to detect and SSH into the instance as soon as SSH is available,
Ubuntu actually doesn't get proper amounts of time to initialize. The sleep
makes sure that the OS properly initializes.
Hopefully it is obvious, but the `builders` section shouldn't actually contain
"..."; it should contain the builder configuration you set up in the previous
page of the getting started guide. Also note the comma after the
`"builders": [...]` section, which was not present in the previous lesson.
To configure the provisioners, we add a new section `provisioners` to the
template, alongside the `builders` configuration. The provisioners section is an
array of provisioners to run. If multiple provisioners are specified, they are
run in the order given.
By default, each provisioner is run for every builder defined. So if we had two
builders defined in our template, such as both Amazon and DigitalOcean, then the
shell script would run as part of both builds. There are ways to restrict
provisioners to certain builds, but it is outside the scope of this getting
started guide. It is covered in more detail in the complete
[documentation](/docs).
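As a brief sketch of one such restriction (the build name must match a builder defined in your template), a provisioner can list the builds it applies to with `only`:
```json
{
  "type": "shell",
  "only": ["amazon-ebs"],
  "inline": ["sudo apt-get update", "sudo apt-get install -y redis-server"]
}
```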
The one provisioner we defined has a type of `shell`. This provisioner ships
with Packer and runs shell scripts on the running machine. In our case, we
specify two inline commands to run in order to install Redis.
## Build
With the provisioner configured, give it a pass once again through
`packer validate` to verify everything is okay, then build it using
`packer build example.json`. The output should look similar to when you built
your first image, except this time there will be a new step where the
provisioning is run.
The output from the provisioner is too verbose to include in this guide, since
it contains all the output from the shell scripts. But you should see Redis
successfully install. After that, Packer once again turns the machine into an
AMI.
If you were to launch this AMI, Redis would be pre-installed. Cool!
This is just a basic example. In a real-world use case, you may be provisioning
an image with the entire stack necessary to run your application, or maybe just
the web stack so that you have a pre-built image for web servers. This saves a
lot of time later when you launch these images, since everything is
pre-installed. And because everything is pre-installed, you can test the images
as they're built and know that when they go into production, they'll be
functional.
[Continue to parallel builds &raquo;](/intro/getting-started/parallel-builds)

View File

@ -1,80 +0,0 @@
---
layout: intro
page_title: Vagrant Boxes - Getting Started
sidebar_title: Vagrant Boxes
description: |-
Packer also has the ability to take the results of a builder (such as an AMI
or plain VMware image) and turn it into a Vagrant box.
---
# Vagrant Boxes
Packer also has the ability to take the results of a builder (such as an AMI or
plain VMware image) and turn it into a [Vagrant](https://www.vagrantup.com) box.
This is done using [post-processors](/docs/templates/post-processors).
These take an artifact created by a previous builder or post-processor and
transform it into a new one. In the case of the Vagrant post-processor, it
takes an artifact from a builder and transforms it into a Vagrant box file.
Post-processors are a generally very useful concept. While the example on this
getting-started page will be creating Vagrant images, post-processors have many
interesting use cases. For example, you can write a post-processor to compress
artifacts, upload them, test them, etc.
Let's modify our template to use the Vagrant post-processor to turn our AWS AMI
into a Vagrant box usable with the [vagrant-aws
plugin](https://github.com/mitchellh/vagrant-aws). If you followed along in the
previous page and set up DigitalOcean, note that Packer can't currently make
Vagrant boxes for DigitalOcean, but will be able to soon.
## Enabling the Post-Processor
Post-processors are added in the `post-processors` section of a template, which
we haven't created yet. Modify your `example.json` template and add the section.
Your template should look like the following:
```json
{
"builders": ["..."],
"provisioners": ["..."],
"post-processors": ["vagrant"]
}
```
In this case, we're enabling a single post-processor named "vagrant". This
post-processor is built into Packer and will create Vagrant boxes. You can
always create [new post-processors](/docs/extending/custom-post-processors), however.
The details of configuring post-processors are covered in the
[post-processors](/docs/templates/post-processors) documentation.
Validate the configuration using `packer validate`.
## Using the Post-Processor
Just run a normal `packer build` and it will now use the post-processor. Since
Packer can't currently make a Vagrant box for DigitalOcean anyway, I recommend
passing the `-only=amazon-ebs` flag to `packer build` so it only builds the AMI.
The command should look like the following:
```shell-session
$ packer build -only=amazon-ebs example.json
```
As you watch the output, you'll notice at the end in the artifact listing that a
Vagrant box was made (by default at `packer_aws.box` in the current directory).
Success!
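From there, one possible next step (the box name below is arbitrary) is to add the file to Vagrant so it can be used with the vagrant-aws provider:
```shell-session
$ vagrant box add packer-tutorial packer_aws.box
$ vagrant box list
```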
But where did the Amazon EBS builder artifact go? When using post-processors,
Packer removes intermediary artifacts since they're usually not wanted. Only
the final artifact is preserved. This behavior can be changed, of course.
Changing this behavior is covered [in the
documentation](/docs/templates/post-processors).
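For example, a post-processor defined in its long form can keep the intermediate artifact with `keep_input_artifact`:
```json
"post-processors": [
  {
    "type": "vagrant",
    "keep_input_artifact": true
  }
]
```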
Typically when removing intermediary artifacts, the actual underlying files or
resources of the artifact are also removed. For example, when building a VMware
image, if you turn it into a Vagrant box, the files of the VMware image will be
deleted since they were compressed into the Vagrant box. When creating AWS
images, however, the AMI is kept around, since Vagrant needs it to function.
[Continue to Next Steps &raquo;](/intro/getting-started/next)

View File

@ -20,7 +20,7 @@ or they had too high of a learning curve. The result is that, prior to Packer,
creating machine images threatened the agility of operations teams, and
therefore weren't used, despite the massive benefits.
Packer changes all of this. Packer is easy to use and automates the creation of
Packer changes all of this. Packer automates the creation of
any type of machine image. It embraces modern configuration management by
encouraging you to use a framework such as Chef or Puppet to install and
configure the software within your Packer-made images.