Azure uses the Python client in their Docker image. I've added additional documentation on how to "install" it, as well as translations of the node.js commands for it.
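As a rough sketch of the kind of install step and translation involved (this assumes pip is available, and the group name and location are placeholders I chose, not values from the docs):
```
# One way to install the Python-based client:
pip install azure-cli

# Example translations (node.js CLI -> Python CLI):
#   azure login                      ->  az login
#   azure group create my-rg westus  ->  az group create --name my-rg --location westus
```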
When booting from a disk image, the Qemu builder resizes the disk to 40000, which is not a multiple of 1kB. This causes problems at boot time. Updating the default disk size to 40960 fixes this issue.
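For illustration, a minimal qemu builder stanza with the new default written out explicitly; the ISO fields and credentials are placeholders, and disk_size is in megabytes:
```
{
  "builders": [{
    "type": "qemu",
    "iso_url": "https://example.com/disk-image.qcow2",
    "iso_checksum_type": "none",
    "disk_image": true,
    "disk_size": 40960,
    "ssh_username": "root",
    "ssh_password": "password"
  }]
}
```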
I think the intention was to show that you can tag and push the same image to multiple repos, but the example given pushes to the same repo twice. This change updates the example so it uses hashicorp/packer1 and hashicorp/packer2.
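In sketch form, the corrected example pairs each docker-tag with its own docker-push, along these lines (builder config and push credentials omitted; the tag value is a placeholder):
```
{
  "post-processors": [
    [
      { "type": "docker-tag", "repository": "hashicorp/packer1", "tag": "0.7" },
      "docker-push"
    ],
    [
      { "type": "docker-tag", "repository": "hashicorp/packer2", "tag": "0.7" },
      "docker-push"
    ]
  ]
}
```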
fixes: #5476
Based on this new template addition:
```
{
  "variables": {
    "image_version": "",
    "triton_account": "",
    "triton_key_id": "",
    "triton_key_material": ""
  },
  "builders": [{
    "type": "triton",
    "triton_account": "{{user `triton_account`}}",
    "triton_key_id": "{{user `triton_key_id`}}",
    "triton_key_material": "{{user `triton_key_material`}}",
    "source_machine_package": "g4-highcpu-128M",
    "source_machine_image_filter": {
      "name": "ubuntu-16.04",
      "most_recent": "true"
    },
    "ssh_username": "root",
    "image_version": "{{user `image_version`}}",
    "image_name": "teamcity-server"
  }],
  "provisioners": [
    {
      "type": "shell",
      "start_retry_timeout": "10m",
      "inline": [
        "sudo apt-get update -y",
        "sudo apt-get install -y nginx"
      ]
    }
  ]
}
```
I got the following output from packer:
```
packer-testing % make image
packer build \
-var "triton_account=stack72_joyent" \
-var "triton_key_id=40:9d:d3:f9:0b:86:62:48:f4:2e:a5:8e:43:00:2a:9b" \
-var "triton_key_material=""" \
-var "image_version=1.0.0" \
new-template.json
triton output will be in this color.
==> triton: Selecting an image based on search criteria
==> triton: Based, on given search criteria, Machine ID is: "7b5981c4-1889-11e7-b4c5-3f3bdfc9b88b"
==> triton: Waiting for source machine to become available...
==> triton: Waiting for SSH to become available...
==> triton: Connected to SSH!
==> triton: Provisioning with shell script: /var/folders/_p/2_zj9lqn4n11fx20qy787p7c0000gn/T/packer-shell797317310
triton: Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
triton: Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
```
I can verify from the triton CLI tools that the id `7b5981c4` (from the packer output) is indeed the correct ID:
```
terraform [master●] % triton images name=~ubuntu-16.04
SHORTID   NAME          VERSION   FLAGS  OS     TYPE        PUBDATE
49b22aec  ubuntu-16.04  20160427  P      linux  lx-dataset  2016-04-27
675834a0  ubuntu-16.04  20160505  P      linux  lx-dataset  2016-05-05
4edaa46a  ubuntu-16.04  20160516  P      linux  lx-dataset  2016-05-16
05140a7e  ubuntu-16.04  20160601  P      linux  lx-dataset  2016-06-01
e331b22a  ubuntu-16.04  20161004  P      linux  lx-dataset  2016-10-04
8879c758  ubuntu-16.04  20161213  P      linux  lx-dataset  2016-12-13
7b5981c4  ubuntu-16.04  20170403  P      linux  lx-dataset  2017-04-03  <------- THIS IS THE LATEST UBUNTU IMAGE
```
Adds the ability to pass extra command line options to the underlying LXC commands via the following new options (see the sketch below):
- create_options: a list of options passed to lxc-create
- start_options: a list of options passed to lxc-start
- attach_options: a list of options passed to lxc-attach
Also extended existing LXC builder BATS tests to exercise the new builder
options, and added website docs.
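A rough sketch of how these might appear in a template; the flag values are illustrative only, and config_file/template_name stand in for whatever the build actually uses:
```
{
  "builders": [{
    "type": "lxc",
    "config_file": "/etc/lxc/default.conf",
    "template_name": "ubuntu",
    "create_options": ["--", "-r", "xenial"],
    "start_options": ["--logpriority", "DEBUG"],
    "attach_options": ["--set-var", "LC_ALL=C"]
  }]
}
```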
An aws_profile option is added to the AWS ECR login credentials
configuration to allow using shared AWS credentials stored in
a non-default profile.
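For example, an ECR push with a named profile might look roughly like this (the account ID, region, and profile name are placeholders):
```
{
  "type": "docker-push",
  "ecr_login": true,
  "aws_profile": "build-profile",
  "login_server": "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/"
}
```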
Signed-off-by: Aaron Browne <aaron0browne@gmail.com>
A lot of examples out there on the web use this command to configure the
instance to allow connections over WinRM. Since the danger is not
immediately obvious, and the failures caused by its use are intermittent,
we should do our best to advise against its use here.
Use of 'winrm quickconfig' can sometimes cause the Packer build to fail
shortly after the WinRM connection is established.
* When executed, the 'winrm quickconfig -q' command configures the
firewall to allow management messages to be sent over HTTP (port 5985)
* This undoes the previous command in the script that configured the
firewall to prevent this access.
* The upshot is that the system is configured and ready to accept WinRM
connections earlier than intended.
* If Packer establishes its WinRM connection immediately after execution
of the 'winrm quickconfig -q' command, the later commands within the
script that restart the WinRM service cause the established
connection, and consequently the overall build, to fail.
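As a concrete (hypothetical) illustration of that sequence, a bootstrap script shaped like this exhibits the race:
```
# Firewall is locked down first, so Packer cannot connect yet.
netsh advfirewall firewall set rule name="Windows Remote Management (HTTP-In)" new enable=no

# DANGER: quickconfig re-enables the rule above, so Packer may connect
# as soon as this line has run...
winrm quickconfig -q

# ...and this later restart then drops the connection Packer just
# established, failing the build.
Restart-Service winrm
```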
Okay, I'm going to merge these docs as-is, on the grounds that an example that works 99% of the time is better than no example. @DanHam I'm still happy to test out other boot configs, but I've put a fair amount of time into trying to get your suggestions to work and haven't gotten it doing what it's supposed to for AWS.