Merge pull request #5314 from hashicorp/remotebuildsdocs

Remotebuildsdocs
Matthew Hooker 2017-09-11 11:44:11 -07:00 committed by GitHub
commit bec4024b19
3 changed files with 0 additions and 119 deletions

View File

@@ -20,10 +20,6 @@ From this point forward, the most important reference for you will be the
[documentation](/docs/index.html). The documentation is less of a guide and more of a
reference covering all of Packer's features and options.
If you're interested in learning more about how Packer fits into the HashiCorp
ecosystem of tools, read our [Atlas getting started
overview](https://atlas.hashicorp.com/help/intro/getting-started).
As you use Packer more, please voice your comments and concerns on the [mailing
list or IRC](/community.html). Additionally, Packer is [open
source](https://github.com/hashicorp/packer) so please contribute if you'd like

View File

@@ -1,112 +0,0 @@
---
layout: intro
sidebar_current: intro-getting-started-remote-builds
page_title: Remote Builds and Storage - Getting Started
description: |-
    Up to this point in the guide, you have been running Packer on your local
    machine to build and provision images on AWS and DigitalOcean. However, you
    can use Atlas by HashiCorp to both run Packer builds remotely and store the
    output of builds.
---
# Remote Builds and Storage
Up to this point in the guide, you have been running Packer on your local
machine to build and provision images on AWS and DigitalOcean. However, you can
use [Atlas by HashiCorp](https://atlas.hashicorp.com) to run Packer builds
remotely and store the output of builds.
## Why Build Remotely?
By building remotely, you can move access credentials off of developer machines,
release local machines from long-running Packer processes, and automatically
start Packer builds from trigger sources such as `vagrant push`, a version
control system, or a CI tool.
## Run Packer Builds Remotely
To run Packer remotely, you must make two changes to the Packer template. The
first is the addition of the `push`
[configuration](https://www.packer.io/docs/templates/push.html), which sends the
Packer template to Atlas so it can run Packer remotely. The second is updating
the `variables` section to read variables from the Atlas environment rather
than the local environment. Remove the `post-processors` section for now if it
is still in your template.
```json
{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  }],
  "push": {
    "name": "ATLAS_USERNAME/packer-tutorial"
  }
}
```
To get an Atlas username, [create an account
here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=packer). Once you have an account, you will need to contact
sales@hashicorp.com to start a trial if you haven't already done so.
Replace "ATLAS\_USERNAME" with your username in the above config. Generate an
Atlas token by navigating to https://atlas.hashicorp.com/settings/tokens and set
that token as an environment variable: `ATLAS_TOKEN=YOURTOKENHERE`.
Then run `packer push example.json` to send the configuration to Atlas, which
automatically starts the build.
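For example, in a Unix-like shell, exporting the token and pushing might look
like the following sketch (`YOURTOKENHERE` is a placeholder for the token you
generated):

```shell
# Set the Atlas token in the current shell so `packer push` can authenticate.
# YOURTOKENHERE is a placeholder; substitute your actual token.
export ATLAS_TOKEN=YOURTOKENHERE

# Send the template to Atlas; this automatically starts a remote build.
packer push example.json
```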
This build will fail since neither `aws_access_key` nor `aws_secret_key` is set
in the Atlas environment. To set environment variables in Atlas, navigate to
the [Builds tab](https://atlas.hashicorp.com/builds), click the
"packer-tutorial" build configuration that was just created, and then click
"variables" in the left navigation. Set `aws_access_key` and `aws_secret_key`
to their respective values. Now restart the Packer build by either clicking
"rebuild" in the Atlas UI or running `packer push example.json` again. When you
click on the active build, you can view the logs in real time.
-> **Note:** Whenever a change is made to the Packer template, you must run
`packer push` to update the configuration in Atlas.
## Store Packer Outputs
Now we have Atlas building an AMI with Redis pre-configured. This is great, but
it's even better to store and version the AMI output so it can be easily
deployed by a tool like [Terraform](https://www.terraform.io). The `atlas`
[post-processor](/docs/post-processors/atlas.html) makes this process easier:
```json
{
  "variables": ["..."],
  "builders": ["..."],
  "provisioners": ["..."],
  "push": ["..."],
  "post-processors": [{
    "type": "atlas",
    "artifact": "ATLAS_USERNAME/packer-tutorial",
    "artifact_type": "amazon.image"
  }]
}
```
Update the `post-processors` block with your Atlas username, then run
`packer push example.json` and watch the build kick off in Atlas! When the build
completes, the resulting artifact will be saved and stored in Atlas.
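Putting the two pieces together, the complete template would look roughly like
this sketch, assembled from the blocks above (with "ATLAS\_USERNAME" replaced by
your username, as before):

```json
{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  }],
  "push": {
    "name": "ATLAS_USERNAME/packer-tutorial"
  },
  "post-processors": [{
    "type": "atlas",
    "artifact": "ATLAS_USERNAME/packer-tutorial",
    "artifact_type": "amazon.image"
  }]
}
```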

View File

@@ -26,9 +26,6 @@
<li<%= sidebar_current("intro-getting-started-vagrant") %>>
  <a href="/intro/getting-started/vagrant.html">Vagrant Boxes</a>
</li>
<li<%= sidebar_current("intro-getting-started-remote-builds") %>>
  <a href="/intro/getting-started/remote-builds.html">Remote Builds</a>
</li>
<li<%= sidebar_current("intro-getting-started-next") %>>
  <a href="/intro/getting-started/next.html">Next Steps</a>
</li>