Compare commits


197 Commits

Author SHA1 Message Date
David Dymko
e1f70147d9
vultr remote docs (#11070) 2021-06-08 15:00:03 +02:00
Brandon Romano
6e037aa84b
Merge pull request #11063 from hashicorp/ks.updates-alert-banner-hashiconf
chore: updates alert banner data
2021-06-08 04:22:14 -07:00
Wilken Rivera
4ab255caba
Update absolute links to relative (#11068)
* Update absolute links to relative

* Update packer-integration-program.mdx

Fix case for component names
2021-06-04 11:58:51 +02:00
Jeff Escalante
2b26376180
rotate algolia api key (#11055) 2021-06-03 15:05:22 -04:00
Yoan Blanc
36a7b28b5f
fix: print unchanged formatted file when using stdin (#11047)
Signed-off-by: Yoan Blanc <yoan@dosimple.ch>
2021-06-03 14:02:42 +02:00
Adrien Delorme
4be9dfd183
Say so when an only or an except option did not match anything (#11050)
* Say so when an only or an except option did not match anything
2021-06-03 13:50:34 +02:00
Adrien Delorme
98990fc34e
hcl2_upgrade: allow to hcl2_upgrade with unknown builders, just log errors (#11049) 2021-06-03 13:41:16 +02:00
Jimmy Merritello
21833e7324
Merge pull request #11065 from jmfury/jm.packer-homepage-update
[Website] Update the homepage
2021-06-02 15:48:12 -05:00
Jimmy Merritello
3e3e013be5
Update the homepage 2021-06-02 14:00:34 -05:00
Andy Bettisworth
940180fb5b
spelling (#11064) 2021-06-02 15:18:37 +02:00
Wilken Rivera
1aa7a1b3f8 workflows/issue-comment-created.yml: Add GH token input
Addresses reported error at
https://github.com/hashicorp/packer/runs/2722241524?check_suite_focus=true
2021-06-01 20:45:34 -04:00
Kendall Strautman
86007b04aa chore: updates alert banner data 2021-06-01 14:01:15 -07:00
Wilken Rivera
488e6d80aa .github/workflows/lock.yml: Fix ISO 8601 date format 2021-05-27 13:50:05 -04:00
Wilken Rivera
0a05b834d7
remove hashibot (#11053)
* Replace `closed_issue_locker` HashiBot action with GitHub action

Related to: #11043

* Replace  with GitHub action

* Replace  with GitHub action
2021-05-27 12:58:58 -04:00
Megan Marsh
f80da79b85
Merge pull request #11046 from hashicorp/extract_oneandone
Extract 1&1 builder
2021-05-25 08:44:25 -07:00
Wilken Rivera
f3f58b1c39
Add Packer Integration Program page (#11042)
* Initial draft of Packer Integration Program docs page

- [ ] Add text from program document.
- [ ] Fix image alignment
- [ ] Fix Checklist alignment (remove bullets if possible)
- [ ] Validate with program team

* Fix broken markdown

* fix styling on centered image and checklist

* revert package-lock update

* Update packer-integration-program.mdx

Fix a few formatting issues

Co-authored-by: Jeff Escalante <jescalan@users.noreply.github.com>
2021-05-25 09:22:12 -04:00
sylviamoss
8eb85ac0e5 put back empty datasource folder 2021-05-25 14:49:45 +02:00
sylviamoss
444605c127 vendor oneandone 2021-05-25 14:46:44 +02:00
sylviamoss
e3010fa817 extract oneandone and add remote docs 2021-05-25 14:26:22 +02:00
Megan Marsh
638be57e43
document gotcha around third party plugins (#11032)
* document gotcha around third party plugins
* Update website/content/docs/commands/hcl2_upgrade.mdx

Co-authored-by: Adrien Delorme <azr@users.noreply.github.com>
2021-05-25 13:55:25 +02:00
Sylvia Moss
69572b442a
Add tip about using local instead of locals for complex cases (#11036)
* add tip about using local instead of locals

* Apply suggestions from code review

Co-authored-by: Megan Marsh <megan@hashicorp.com>

Co-authored-by: Wilken Rivera <wilken@hashicorp.com>
Co-authored-by: Megan Marsh <megan@hashicorp.com>
2021-05-24 21:50:35 -04:00
Jeff Escalante
f07813e14d
fix a typo on the downloads page (#11044) 2021-05-24 16:45:52 -04:00
Megan Marsh
5a14b11f2e update changelog 2021-05-19 15:24:38 -07:00
Megan Marsh
fe6077fc85
Merge pull request #11024 from hashicorp/remove_veewee_guide
Remove veewee guide
2021-05-18 13:57:05 -07:00
Megan Marsh
df89b4b52c
Merge pull request #11025 from hashicorp/remove_isotime_guide
Remove isotime guide
2021-05-18 13:31:36 -07:00
Megan Marsh
70739cf1a1
Merge pull request #11030 from hashicorp/remove_comment_guide
delete comment guide, moving contents to the comment section of the t…
2021-05-18 13:19:13 -07:00
Megan Marsh
ca513aa028 upgrade to azure v0.0.3 2021-05-18 09:56:39 -07:00
Megan Marsh
57639e8330 remove workflow-tips-and-tricks pages 2021-05-18 09:49:35 -07:00
Megan Marsh
41c0668d6e delete comment guide, moving contents to the comment section of the template engine docs for json templates 2021-05-18 09:43:42 -07:00
Megan Marsh
b6a9da11a2 clarify template variable warning 2021-05-18 09:21:26 -07:00
Megan Marsh
a3daf4c686 escape backticks 2021-05-18 09:18:40 -07:00
Megan Marsh
d9b874f118 tweak engine docs to contain guide docs for isotime
remove docs page
2021-05-18 09:08:25 -07:00
Megan Marsh
1d4a8e7ba2 delete veewee references. Veewee hasn't been updated for three years, and the veewee-to-packer library for even longer. This page gets almost no views. 2021-05-18 08:44:45 -07:00
Ilya Voronin
6867456a72
Implemented DEFAULT_NAME handling for datasource plugins (#11026) 2021-05-18 10:47:43 +02:00
Brandon Romano
b6db1ac319
Merge pull request #11023 from hashicorp/br.hashiconf-banner
Adds AlertBanner to promote HashiConf EU
2021-05-17 15:32:16 -07:00
Brandon Romano
5a290aabef Adds AlertBanner to promote HashiConf EU 2021-05-17 15:07:18 -07:00
Megan Marsh
cb09e62fac
Merge pull request #11017 from hashicorp/d-hcl2upgrade-tutorial
Add link to the new HCL2 upgrade command tutorial
2021-05-14 11:34:16 -07:00
Wilken Rivera
d24fa1d8ad Add link to the new HCL2 upgrade command tutorial
This change replaces the from JSON v1 guide with a redirect to the
released hcl2_upgrade command tutorial on learn.hashicorp.com
2021-05-14 14:17:25 -04:00
Wilken Rivera
c262467413
Extract Azure plugin components from Packer (#10979)
* Remove Azure plugin components and docs

* Add Azure plugin to vendored_plugins

* Updates Azure to use remote plugin docs

* Revendor packer-plugin-azure at v0.0.2

* Update to new version of the Packer plugin SDK v0.2.1
* Update to the official version of go-getter@v2.0.0
* Update salt provisioner to use new go-getter API

* Update vendored plugins to working versions

This change fixes an issue with the go.sum for the Azure plugin.
It also revendors the plugins for puppet, chef, and ansible, as v0.0.1
of those plugins were not usable.
2021-05-13 16:13:21 -04:00
Wilken Rivera
a8db14ab6b
Fix broken links (#11016) 2021-05-13 12:16:04 -04:00
Geoffrey Grosenbach
f900049fc2
Fix links to Learn (#11014)
Some links to introductory tutorials were incorrect after some URLs changed. This fixes the links and the redirects.
2021-05-13 12:15:02 -04:00
Megan Marsh
f24599ecca
Merge pull request #11005 from ryapric/packer-docs-build-kvm-in-container
Rough guide-doc update for QEMU KVM builds from within containers
2021-05-11 10:04:11 -07:00
Ryan J. Price
32dc0aa731 Remove extra parenthesis 2021-05-11 11:00:08 -05:00
Sergio Conde Gómez
f6f2469d4d
fix: HCL "index" function now actually returns the index of the element (#11008) 2021-05-11 10:24:46 +02:00
Ryan J. Price
f26a764b8a Clarify paragraph wording 2021-05-10 20:32:23 -05:00
Ryan J. Price
09676ac699 Missed VMWare in CICD Guide change 2021-05-10 20:27:37 -05:00
Ryan J. Price
90b6747e3d Edit CICD website guide to be more clear 2021-05-10 20:24:37 -05:00
Megan Marsh
8c93297eb0
update docs to use double quotes, which works on more shells (#11009) 2021-05-10 16:00:53 -04:00
Ryan J. Price
627cbcdcba Rough guide-doc update for QEMU KVM builds from within containers 2021-05-08 17:03:23 -05:00
Steph Gosling
1e51ed9df9
fix: specify a string for name rather than a variable which is not supported (#10987) 2021-05-07 11:02:03 +02:00
Zachary Shilton
17899731ea
website: support hidden pages in nav-data (#10993)
* website: bump to docs-page prerelease with hidden page support

* website: remove temporary check for hidden pages, now covered by docs-page

* website: bump to stable docs-page, w next-mdx-remote bump

* website: bump to latest markdown-page
2021-05-06 13:19:26 -04:00
Kendall Strautman
a9c2283ee5
chore(website): adds ts config, upgrades deps (#10905)
* chore(website): adds ts config, upgrades deps

* chore: adjust ts config option

* chore: upgrade deps

* fix: subnav active path routing

* style(home): update integrations bg color

* chore: upgrade deps

* style: fix body copy color

* style: fix product downloads page height

* feat: updates favicon

* chore(downloads): upgrade to prerelease

* chore: upgrades product download page to stable

* chore: update favicon.ico
2021-05-03 10:58:09 -07:00
Megan Marsh
5555987e4d clean up changelog entries 2021-04-27 11:55:45 -07:00
Megan Marsh
b66a8b783a update changelog 2021-04-26 14:46:55 -07:00
Megan Marsh
28567dedb8
Merge pull request #10977 from ShiChangkuo/huaweicloud_plugin
website: add huaweicloud remote plugin
2021-04-26 11:35:06 -07:00
Megan Marsh
69705f3e40
Merge pull request #10978 from hashicorp/azr_invalid_prov_pause_before
Fix Invalid provisioner pause_before panic
2021-04-26 11:31:42 -07:00
Adrien Delorme
bf939b7474 Update types.build.provisioners.go
if pause_before is invalid, that's an error
2021-04-26 17:37:24 +02:00
Adrien Delorme
e60a7e60b9 add test to repro #10975 2021-04-26 17:31:22 +02:00
ShiChangkuo
05ef5590a5 website: add huaweicloud remote plugin 2021-04-26 20:24:50 +08:00
Megan Marsh
4ebc09f08f
Merge pull request #10863 from notchairmk/az-sig-account
azure arm: support for shared image gallery storage account type
2021-04-23 16:57:58 -07:00
Taylor Chaparro
9441a69ad1 azure builder: add support for shared image gallery storage account type 2021-04-23 16:57:24 -07:00
Taylor Chaparro
c0902d7519 azure builder: use struct for publishing shared image gallery image version 2021-04-23 16:56:38 -07:00
Megan Marsh
61db31e8ae
Merge pull request #10045 from ContigoRed/azure_keep_os_disk
Azure: arm builder: adding keep_os_disk parameter to control OS disk deletion
2021-04-23 16:55:17 -07:00
Megan Marsh
1788d29567 fix 2021-04-23 16:50:41 -07:00
Contigo Red
82a1f017aa azure: arm builder: adding keep_os_disk parameter to control OS disk deletion.
keep_os_disk: auto generated help
azure: arm builder: add disk os to artifact
azure: arm builder: fmt'ed artifact_test.go
2021-04-23 12:36:09 -07:00
Zachary Shilton
95ed4443bb
website: check for unlinked content, rm split-out vagrant content (#10958) 2021-04-23 17:04:20 +02:00
Sylvia Moss
0bd7b20bb8
website: add the possibility to fetch a local docs.zip for remote plugins (#10973) 2021-04-23 16:33:32 +02:00
Megan Marsh
8a3912c54b
Merge pull request #10938 from AHuusom/master
Added custom nicname and osdiskname
2021-04-22 11:05:19 -07:00
Adrien Delorme
38837848f9
Breakout yandex (#10970) 2021-04-22 17:03:14 +02:00
Sylvia Moss
e681669c70
Remove codecov config file (#10969) 2021-04-22 16:58:43 +02:00
Sylvia Moss
58bf783a2f
Update plugins badge (#10968) 2021-04-22 16:58:17 +02:00
Sylvia Moss
bcb25f1916
Extract Tencent Cloud (#10967)
* extract and vendor tencentcloud plugin

* fix fmt
2021-04-22 15:21:34 +02:00
Adrien Delorme
ef612c0eb1
Breakout hcloud (#10966)
* Delete hetzner-cloud.mdx

* delete hcloud builder

* use hcloud plugin

* up mods

* use github.com/hashicorp/packer-plugin-hcloud v0.0.1
2021-04-22 14:52:07 +02:00
Sylvia Moss
972497589e
extract and vendor lxc and lxd (#10965) 2021-04-22 14:21:23 +02:00
Adrien Delorme
2cd296874e
Triton plugin breakout (#10963) 2021-04-22 14:06:30 +02:00
Megan Marsh
f161f2bed2
extract oracle plugin (#10962) 2021-04-22 11:50:00 +02:00
Megan Marsh
6b59525408
remove digitalocean directories, revendor, add to vendored_plugins, regenerate code, and update website paths (#10961) 2021-04-22 11:45:27 +02:00
Megan Marsh
d0a15f9a15
Merge pull request #10956 from hashicorp/extract-converge
Extract converge provisioner
2021-04-21 13:40:15 -07:00
Megan Marsh
af37f53439
Extract vagrant (#10960)
* remove vagrant, rework website

* regenerate command/plugin, and go mod tidy
2021-04-21 16:31:28 -04:00
Wilken Rivera
d0588580fa Empty commit to trigger Vercel 2021-04-21 16:14:50 -04:00
Megan Marsh
ccbd0a29cc
remove outdated linode codeowners (#10957) 2021-04-21 15:38:35 -04:00
Zachary Shilton
d08aa6f6b0
website: update readme (#10931)
* website: bump to nextjs-scripts prerelease

* website: update stale sections in readme

* website: bump nextjs-scripts to latest prerelease

* website: update docs-sidebar section to prep for shared section

* website: update readme with latest blocks

* website: revert bump to nextjs-scripts, deferred

* website: bump to latest nextjs-scripts
2021-04-21 15:35:52 -04:00
Wilken Rivera
bb511e9592 Extract converge provisioner 2021-04-21 14:19:34 -04:00
Megan Marsh
b3ba270f2d fix typo and regenerate 2021-04-21 11:19:16 -07:00
Sylvia Moss
4be2c350bf
extract and vendor ucloud (#10953) 2021-04-21 13:25:04 -04:00
Megan Marsh
bc35a737c0
remove codecov from repo (#10955) 2021-04-21 13:22:34 -04:00
Megan Marsh
b5666b84cd
Extract jdcloud (#10946)
* delete jdcloud builder dir, revendor, regenerate, add to vendored_plugins

* change website pathing

* Extract linode (#10947)

* started extracting linode

* revendor linode

* clean up vendoring
2021-04-21 10:55:41 -04:00
Sylvia Moss
469f033c36
remove and vendor hyperv (#10952) 2021-04-21 16:32:34 +02:00
Sylvia Moss
2db338e322
Extract Hyperone (#10949) 2021-04-21 15:08:38 +02:00
Adrien Delorme
0f6a081724
Remove the vendor dir (#10916)
* update ci scripts
2021-04-21 10:52:55 +02:00
Megan Marsh
4b093aab78
Merge pull request #10934 from hashicorp/extract_cloudstack
Extract cloudstack
2021-04-20 13:52:36 -07:00
Megan Marsh
5145893ae5 update website 2021-04-20 13:47:37 -07:00
Megan Marsh
9044deeb05 Delete cloudstack dir, revendor 2021-04-20 13:46:11 -07:00
Megan Marsh
dc63b9a7a4
Merge pull request #10943 from hashicorp/extract-puppet
Extract Puppet plugins
2021-04-20 13:44:20 -07:00
Megan Marsh
d8c3584b46
Merge pull request #10921 from hashicorp/extract-chef
Extract Chef Plugins
2021-04-20 13:42:55 -07:00
Wilken Rivera
386f7c3f56 Remove duplicate routes from rebase 2021-04-20 15:27:21 -04:00
Wilken Rivera
8bf03cbca7 Vendor packer-plugin-puppet 2021-04-20 15:27:21 -04:00
Wilken Rivera
eb6527c8b6 Remove Puppet components and docs 2021-04-20 15:27:21 -04:00
Wilken Rivera
fd028b71b3 Add remote docs 2021-04-20 15:27:21 -04:00
Wilken Rivera
0ec3ad3db1 Add remote docs 2021-04-20 15:27:21 -04:00
Wilken Rivera
a29f3340c2 Vendor packer-plugin-chef 2021-04-20 15:26:56 -04:00
Wilken Rivera
45dfb97ff4 Add remote docs 2021-04-20 15:26:56 -04:00
Wilken Rivera
30bcf44c2c Remove Chef components and docs 2021-04-20 15:25:08 -04:00
Megan Marsh
2da9c21733
Merge pull request #10944 from hashicorp/openstack-extraction-followup
Remove reference to openstackbuilder
2021-04-20 11:01:30 -07:00
Wilken Rivera
987080a409 Remove reference to openstackbuilder 2021-04-20 13:54:57 -04:00
Anders Huusom
ed4ca8e6dc formatted code 2021-04-20 19:53:49 +02:00
Megan Marsh
9bdb809edd
Merge pull request #10933 from hashicorp/extract_openstack
extract openstack into its own plugin
2021-04-20 10:28:00 -07:00
Megan Marsh
88192f1fdd delete openstack files 2021-04-20 10:17:14 -07:00
Megan Marsh
6fa213235f extract and revendor
update website nav
2021-04-20 10:17:10 -07:00
Megan Marsh
af34218909
Merge pull request #10932 from hashicorp/remove-alicloud
extract alicloud plugin to its own repo
2021-04-20 10:16:41 -07:00
Megan Marsh
2dbfef5750
Update website/data/docs-remote-plugins.json
Co-authored-by: Wilken Rivera <wilken@hashicorp.com>
2021-04-20 10:11:32 -07:00
Megan Marsh
2f927177d9 fix website 2021-04-20 09:54:49 -07:00
Megan Marsh
50fadc4118 remove from website, add remote docs 2021-04-20 09:54:49 -07:00
Megan Marsh
b54121a72d delete and revendor alicloud plugin 2021-04-20 09:54:45 -07:00
Adrien Delorme
4de2954d01
Scaleway plugin breakout (#10939)
* use vendored scaleway plugin

* wipe out scaleway

* vendor vendors

* use remote docs

* go get github.com/hashicorp/packer-plugin-scaleway@v0.0.1

* empty commit
2021-04-20 11:59:59 -04:00
Sylvia Moss
25a999978b
Remove Parallels plugin (#10936) 2021-04-20 17:46:42 +02:00
Sylvia Moss
d6904502ac
Extract outscale (#10941)
* remove outscale, vendor it and add remote docs

* fix lint

* add community plugin tier

* Update go.mod

* up mods

Co-authored-by: Adrien Delorme <azr@users.noreply.github.com>
2021-04-20 17:18:45 +02:00
Wilken Rivera
2061aa9e69
Add pluginTier to community plugins (#10942) 2021-04-20 10:49:35 -04:00
Zachary Shilton
cbe06050b4
website: enable plugin tier override (#10919)
* website: enable plugin tier override

* website: validate remote plugins config pluginTier
2021-04-20 10:28:41 -04:00
Anders Huusom
de86b45848 removed obsolete line 2021-04-20 15:54:29 +02:00
Anders Huusom
770124207c added TempOSDiskName definition 2021-04-20 15:51:16 +02:00
Adrien Delorme
d22ba61ae0
ncloud breakout (#10937) 2021-04-20 15:09:11 +02:00
Adrien Delorme
9eaad88ac0
Move proxmox builder out + vendor it (#10930)
* use vendored proxmox builders

* Update docs-nav-data.json

remove proxmox ref

* Update docs-remote-plugins.json

* remove builder/proxmox dir

* remove website/content/docs/builders/proxmox/

* up vendors

* Update modules.txt

* Update HTTPConfig-not-required.mdx

* Update HTTPConfig-not-required.mdx

* tidy mod

* fmt

* Update modules.txt
2021-04-20 14:59:34 +02:00
Anders Huusom
dbe6010510 Added custom nicname and osdiskname 2021-04-20 14:16:27 +02:00
Megan Marsh
20faaef05c
Merge pull request #10929 from hashicorp/extract_qemu
Extract QEMU plugin
2021-04-19 15:39:44 -07:00
Megan Marsh
d0a4a71da8
Merge pull request #10927 from hashicorp/fix_typo
Fix TEMPATE to TEMPLATE in fmt cmd help text
2021-04-19 09:24:50 -07:00
sylviamoss
3af472be9a update qemu to latest version 2021-04-19 17:47:09 +02:00
sylviamoss
5a00020830 add qemu to docs-remote-plugins.json 2021-04-19 17:45:51 +02:00
Adrien Delorme
6094c97998 Update docs-remote-plugins.json
order alphabetically
2021-04-19 17:20:46 +02:00
Adrien Delorme
1a41eac70b Update vendored_plugins.go
order alphabetically
2021-04-19 17:05:21 +02:00
sylviamoss
7a85b7328e vendor qemu plugin 2021-04-19 16:32:04 +02:00
Sylvia Moss
3dac34766c
add legacy_isotime docs (#10928) 2021-04-19 16:29:43 +02:00
sylviamoss
642ed07476 remote qemu plugin 2021-04-19 16:28:12 +02:00
Sylvia Moss
88f8feecfe
Extract vmware plugin (#10920) 2021-04-19 14:28:48 +02:00
sylviamoss
b448c3182c fix TEMPATE to TEMPLATE in fmt cmd 2021-04-19 14:07:22 +02:00
Adrien Delorme
9230a06920
move googlecompute plugin to github.com/hashicorp/packer-plugin-googlecompute (#10890) 2021-04-19 11:10:15 +02:00
Sylvia Moss
16658a9f47
Extract virtualbox plugin (#10910) 2021-04-16 17:38:02 +02:00
Wilken Rivera
ceb96d061a
Extract ansible plugins (#10912)
* Remove ansible components and docs

* Vendored packer-plugin-ansible

* Add remote ansible docs
2021-04-16 10:31:09 -04:00
Romain Lecat
bb1a025f60
Add outscale-mgo to osc codeowners (#10917) 2021-04-16 15:25:44 +02:00
Adrien Delorme
87ba7258b3
Use packer-sdc in packer + remove mapstructure-to-hcl2 & struct-markdown (#10913)
* start using `go:generate packer-sdc struct-markdown`

* Update Makefile

remove @go install ./cmd/struct-markdown

* run go generate for struct-markdown

* use //go:generate packer-sdc mapstructure-to-hcl2

* run go generate for mapstructure-to-hcl2

* remove struct-markdown and mapstructure-to-hcl2

* vendor vendors
2021-04-16 11:52:03 +02:00
Megan Marsh
da312e2785
Merge pull request #10896 from hashicorp/extract_vsphere
Extract vSphere plugin
2021-04-15 16:30:27 -07:00
Megan Marsh
84af0ba6da go mod tidy 2021-04-15 16:25:58 -07:00
sylviamoss
3c6b7841bc fix vsphere link 2021-04-15 16:25:36 -07:00
sylviamoss
c7ee5f1efd update packer-plugin-vsphere and sdk 2021-04-15 16:25:36 -07:00
sylviamoss
a00846102b add vsphere to docs-remote-plugins.json 2021-04-15 16:25:36 -07:00
sylviamoss
41c66d6935 vendor vsphere plugin 2021-04-15 16:25:31 -07:00
sylviamoss
f6854f5528 update go vendor 2021-04-15 16:24:57 -07:00
sylviamoss
38fe79948b remove vsphere components and docs 2021-04-15 16:24:57 -07:00
Daniel Finneran
a6c5958c67
Adds bzip2 support to post-processor (#10867)
* compress post processor: add bzip2 + tests

* post-processor/compress/post-processor_test.go: refactor tests and add tests for bzip2

* post-processor_test.go: test write/read for all compression algos

* check artifact.Destroy() errors

* close archive before deleting it

Co-authored-by: Adrien Delorme <azr@users.noreply.github.com>
2021-04-15 18:05:09 +02:00
Jeff Escalante
c17f236e85
Upgrade Downloads Page (#10907)
* upgrade downloads page

* fix syntax errors on builders/ncloud page
2021-04-14 14:51:29 -04:00
Megan Marsh
bb5d7b6c40
Merge pull request #10870 from NaverCloudPlatform/master
Support ncloud vpc version
2021-04-13 10:32:16 -07:00
sangkyu-kim
1ea5a547e2
Merge branch 'master' into master 2021-04-13 13:46:48 +09:00
Megan Marsh
86b8ce8df0
Postprocessor only docs (#10899)
* add a note for only/except from cli to the post-processor template section

* typo; missing space

* Update website/content/docs/templates/hcl_templates/blocks/build/post-processor.mdx

Co-authored-by: Wilken Rivera <dev@wilkenrivera.com>

* tweak wording

Co-authored-by: Wilken Rivera <dev@wilkenrivera.com>
2021-04-12 15:39:27 -04:00
Megan Marsh
b51bf9250e
Merge pull request #10900 from hashicorp/digitalocean-import-docs-fix
Update URL to custom-images overview page
2021-04-12 12:18:00 -07:00
Wilken Rivera
2d5a32629a Update URL to custom-images overview page 2021-04-12 10:30:25 -04:00
Kerim Satirli
3e2db82cab
fixes typo (#10894) 2021-04-09 11:59:08 +02:00
Megan Marsh
bc9dd69669
Merge pull request #10880 from hashicorp/amazon_acc_test
Add plugin acceptance test using the Amazon plugin
2021-04-08 13:31:32 -07:00
Megan Marsh
734e91b97c
Merge pull request #10878 from hashicorp/rewrite_acctests
Move acctest pkg from the SDK to core and update acceptance tests
2021-04-08 13:21:18 -07:00
Zachary Shilton
ab0d1ee363
website: fix edit links for remote plugins (#10884)
* website: fix issue with edits links, use branch name, not version

* website: patch layout shift issue related to global style

* website: update plugin config docs with sourceBranch

* website: tweak spacing above plugin tier label

* website: add note on default value for sourceBranch
2021-04-08 10:09:58 -04:00
Kerim Satirli
cf94fd1778
switches JSON and HCL2 tabs (#10888)
* switches JSON and HCL2 tabs for all provisioners

* corrects `packer` to `Packer`

* corrects `http` to `HTTP`

* corrects typos and highlighting consistency issues

* corrects typos and highlighting consistency issues

* corrects typos and highlighting consistency issues

* `ansible` -> `Ansible`

* `packer fmt` for HCL2 blocks in provisioners

* linting and spelling

* fixes formatting

* fixes formatting

* Update website/content/docs/provisioners/ansible.mdx

Co-authored-by: Adrien Delorme <azr@users.noreply.github.com>

* fixes formatting

* improves example

* generate stuff

Co-authored-by: Adrien Delorme <azr@users.noreply.github.com>
2021-04-08 15:02:57 +02:00
George Wilson
15a2e59bba
Autogenerated docs for ansible local provisioner (#10829) 2021-04-08 12:18:49 +02:00
Kerim Satirli
8c4eb5f4aa
corrects default value and adds highlighting (#10886)
* value is expected to be `ssh` not `SSH`

* highlighting default values
2021-04-08 12:02:05 +02:00
Kendall Strautman
86788220a9
Merge pull request #10875 from hashicorp/ks.style/branding-refresh
style(website): upgrade react-components, colors, logos
2021-04-07 08:02:13 -07:00
Kerim Satirli
e3bcb4f2ac
adds highlighting to locals stanza (#10881) 2021-04-07 16:38:23 +02:00
Kendall Strautman
ccd0430fda
Update website/pages/downloads/style.css
Co-authored-by: Zachary Shilton <4624598+zchsh@users.noreply.github.com>
2021-04-07 07:35:05 -07:00
sylviamoss
49474f8f37 add plugin acceptance test using amazon plugin 2021-04-07 16:21:15 +02:00
sylviamoss
e0614cabf4 move acctest pkg from sdk to core and update acceptance tests 2021-04-07 11:52:19 +02:00
Shaun McEvoy
eaec3e5564
Add Image Storage Locations field to Google Compute Import post-processor (#10864)
* add image storage locations to Google Compute Import
2021-04-07 10:36:08 +02:00
James
eaaf22dcde
builder/digitalocean: support ecdsa, ed25519, dsa temporary key types via packer-plugin-sdk/communicator step… (#10856)
* support ecdsa, ed25519 temporary key algos via temporary_key_pair_algorith config

* builder/digitalocean: improve public key marshalling error handling

* builder/digitalocean: use packer-plugin-sdk to manage temporary ssh keys

* builder/digitalocean: clean up unused properties

Co-authored-by: tserkov <tserkov@penguin>
2021-04-07 10:33:05 +02:00
Qingchuan Hao
f2f33fa344
Correct SIG timeout (#10816) 2021-04-07 10:00:17 +02:00
Kendall Strautman
9189f1228e chore: adds comment 2021-04-06 15:37:05 -07:00
Kendall Strautman
5e7b5729e6 style: override product-downloader colors 2021-04-06 15:35:23 -07:00
Megan Marsh
9365c90c0b revendor 2021-04-06 13:52:07 -07:00
Megan Marsh
f27bdf85f4 upgrade exoscale dependency 2021-04-06 13:51:14 -07:00
Kendall Strautman
9f7bb4da25 chore: fix deps 2021-04-06 13:48:00 -07:00
Kendall Strautman
b1967e99c7 chore: upgrade react-components, colors, logos 2021-04-06 13:47:58 -07:00
Brandon Romano
7efb41868f
Upgrade StackMenu to latest (#10874) 2021-04-06 16:40:48 -04:00
Wilken Rivera
260906c3e4
Add redirect for Docker post-processor pages (#10872)
Remote plugin docs such as Docker now fall under a top level path named
after the provider (e.g. https://packer.io/docs/docker/...). This change
adds a redirect for the old URLs to the new location.
2021-04-06 10:37:05 -04:00
Roshan Padaki
f65e1d5d55
Fix tiny typo in hcl2_upgrade.mdx (#10868) 2021-04-06 11:51:10 +02:00
sangkyu.kim
23e8684aae fix lint, fmt, generate 2021-04-06 11:40:41 +09:00
sangkyu.kim
e22d9861aa update modules 2021-04-06 10:56:16 +09:00
sangkyu-kim
1c8fc65223
Merge branch 'master' into master 2021-04-06 10:35:58 +09:00
sangkyu-kim
42ca66752f
Merge pull request #1 from NaverCloudPlatform/ncloud_vpc
Implement VPC
2021-04-06 10:27:50 +09:00
packer-ci
33461126e2 Putting source back into Dev Mode 2021-04-05 23:32:25 +00:00
packer-ci
1f834e229a
Cut version 1.7.2 2021-04-05 22:55:12 +00:00
packer-ci
4417f8b3bf cut version 1.7.2 2021-04-05 22:55:11 +00:00
packer-ci
8db540a935 update changelog 2021-04-05 22:55:11 +00:00
sangkyu.kim
15a9e1b20a skip validate product_code if empty 2021-04-02 18:18:59 +09:00
sangkyu.kim
3f23a5ec74 Remove getClassicServerImageProductList within block storage size 100GB 2021-04-02 16:38:58 +09:00
sangkyu.kim
f4cbb5d7dc fix length validation message 2021-04-02 14:55:05 +09:00
sangkyu.kim
cdcdf6a618 fix checking subnet type 2021-03-31 18:22:22 +09:00
sangkyu.kim
74434b3c3e create temporary ACG for VPC 2021-03-31 15:15:57 +09:00
sangkyu.kim
3a11352dfa Fix ncloud builder unhandled buildvar type 2021-03-30 13:59:59 +09:00
sangkyu.kim
56728a937b update ncloud guide 2021-03-30 11:53:42 +09:00
sangkyu.kim
f044a64014 fix test code 2021-03-29 22:51:04 +09:00
sangkyu.kim
cd370aaaad implement vpc environment 2021-03-29 22:51:04 +09:00
sangkyu.kim
af865b1591 update ncloud-sdk-go-v2 vendor 2021-03-29 22:51:03 +09:00
10254 changed files with 3501 additions and 2997047 deletions

View File

@ -1,6 +1,5 @@
orbs:
win: circleci/windows@1.0.0
codecov: codecov/codecov@1.0.5
version: 2.1
@ -20,10 +19,13 @@ commands:
type: string
GOVERSION:
type: string
HOME:
type: string
default: "~"
steps:
- checkout
- run: curl https://dl.google.com/go/go<< parameters.GOVERSION >>.<< parameters.GOOS >>-amd64.tar.gz | tar -C ~/ -xz
- run: ~/go/bin/go test ./... -coverprofile=coverage.txt -covermode=atomic
- run: curl https://dl.google.com/go/go<< parameters.GOVERSION >>.<< parameters.GOOS >>-amd64.tar.gz | tar -C << parameters.HOME >>/ -xz
- run: << parameters.HOME >>/go/bin/go test ./... -coverprofile=coverage.txt -covermode=atomic
install-go-run-tests-windows:
parameters:
GOVERSION:
@ -61,8 +63,6 @@ jobs:
steps:
- checkout
- run: TESTARGS="-coverprofile=coverage.txt -covermode=atomic" make ci
- codecov/upload:
file: coverage.txt
test-darwin:
executor: darwin
working_directory: ~/go/github.com/hashicorp/packer
@ -70,8 +70,6 @@ jobs:
- install-go-run-tests-unix:
GOOS: darwin
GOVERSION: "1.16"
- codecov/upload:
file: coverage.txt
test-windows:
executor:
name: win/vs2019
@ -79,8 +77,6 @@ jobs:
steps:
- install-go-run-tests-windows:
GOVERSION: "1.16"
- codecov/upload:
file: coverage.txt
check-lint:
executor: golang
resource_class: xlarge
@ -90,15 +86,6 @@ jobs:
- run:
command: make ci-lint
no_output_timeout: 30m
check-vendor-vs-mod:
executor: golang
working_directory: /go/src/github.com/hashicorp/packer
environment:
GO111MODULE: "off"
steps:
- checkout
- run: GO111MODULE=on go run . --help
- run: make check-vendor-vs-mod
check-fmt:
executor: golang
steps:
@ -207,7 +194,6 @@ workflows:
check-code:
jobs:
- check-lint
- check-vendor-vs-mod
- check-fmt
- check-generate
build_packer_binaries:

View File

@ -1,18 +0,0 @@
comment:
layout: "flags, files"
behavior: default
require_changes: true # only comment on changes in coverage
require_base: yes # [yes :: must have a base report to post]
require_head: yes # [yes :: must have a head report to post]
after_n_builds: 3 # wait for all OS test coverage builds to post comment
branches: # branch names that can post comment
- "master"
coverage:
status:
project: off
patch: off
ignore: # ignore hcl2spec generated code for coverage and mocks
- "**/*.hcl2spec.go"
- "**/*_mock.go"

5
.github/labeler-issue-triage.yml vendored Normal file
View File

@ -0,0 +1,5 @@
bug:
- 'panic:'
crash:
- 'panic:'

View File

@ -20,6 +20,7 @@ async function checkPluginDocs() {
console.log(`\n${COLOR_BLUE}${repo}${COLOR_RESET} | ${title}`);
console.log(`Fetching docs from release "${version}" …`);
try {
// Validate that all required properties are present
const undefinedProps = ["title", "repo", "version", "path"].filter(
(key) => typeof pluginEntry[key] == "undefined"
);
@ -34,6 +35,30 @@ async function checkPluginDocs() {
)} are defined. Additional information on this configuration can be found in "website/README.md".`
);
}
// Validate pluginTier property
const { pluginTier } = pluginEntry;
if (typeof pluginTier !== "undefined") {
const validPluginTiers = ["official", "community"];
const isValid = validPluginTiers.indexOf(pluginTier) !== -1;
if (!isValid) {
throw new Error(
`Failed to validate plugin docs config. Invalid pluginTier "${pluginTier}" found for "${
title || pluginEntry.path || repo
}". In "website/data/docs-remote-plugins.json", the optional pluginTier property must be one of ${JSON.stringify(
validPluginTiers
)}. The pluginTier property can also be omitted, in which case it will be determined from the plugin repository owner.`
);
}
}
// Validate that local zip files are not used in production
if (typeof pluginEntry.zipFile !== "undefined") {
throw new Error(
`Local ZIP file being used for "${
title || pluginEntry.path || repo
}". The zipFile option should only be used for local development. Please omit the zipFile attribute and ensure the plugin entry points to a remote repository.`
);
}
// Attempt to fetch plugin docs files
const docsMdxFiles = await fetchPluginDocs({ repo, tag: version });
const mdxFilesByComponent = docsMdxFiles.reduce((acc, mdxFile) => {
const componentType = mdxFile.filePath.split("/")[1];

View File

@ -0,0 +1,17 @@
name: Issue Comment Created Triage
on:
issue_comment:
types: [created]
jobs:
issue_comment_triage:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-ecosystem/action-remove-labels@v1
with:
github_token: "${{ secrets.GITHUB_TOKEN }}"
labels: |
stale
waiting-reply

16
.github/workflows/issues-opened.yml vendored Normal file
View File

@ -0,0 +1,16 @@
name: Issue Opened Triage
on:
issues:
types: [opened]
jobs:
issue_triage:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: github/issue-labeler@v2
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/labeler-issue-triage.yml

29
.github/workflows/lock.yml vendored Normal file
View File

@ -0,0 +1,29 @@
name: 'Lock Threads'
on:
schedule:
- cron: '50 1 * * *'
# Only 50 issues will be handled during a given run.
jobs:
lock:
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@v2
with:
github-token: ${{ github.token }}
issue-lock-comment: >
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
issue-lock-inactive-days: '30'
# Issues older than 180 days ago should be ignored
issue-exclude-created-before: '2020-11-01'
pr-lock-comment: >
I'm going to lock this pull request because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
pr-lock-inactive-days: '30'
# Issues older than 180 days ago should be ignored
pr-exclude-created-before: '2020-11-01'

2
.gitignore vendored
View File

@ -13,6 +13,8 @@ test/.env
*.received.*
*.swp
vendor/
website/.bundle
website/vendor

View File

@ -85,8 +85,7 @@ run:
# If invoked with -mod=vendor, the go command assumes that the vendor
# directory holds the correct copies of dependencies and ignores
# the dependency descriptions in go.mod.
modules-download-mode: vendor
modules-download-mode: readonly
# output configuration options
output:

View File

@ -1,28 +1,3 @@
behavior "regexp_issue_labeler" "panic_label" {
regexp = "panic:"
labels = ["crash", "bug"]
}
behavior "remove_labels_on_reply" "remove_stale" {
labels = ["waiting-reply", "stale"]
only_non_maintainers = true
}
poll "closed_issue_locker" "locker" {
schedule = "0 50 1 * * *"
closed_for = "720h" # 30 days
max_issues = 500
sleep_between_issues = "5s"
no_comment_if_no_activity_for = "4320h" # 180 days
message = <<-EOF
I'm going to lock this issue because it has been closed for _30 days_. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
EOF
}
poll "label_issue_migrater" "remote_plugin_migrater" {
schedule = "0 20 * * * *"
new_owner = "hashicorp"

View File

@ -1,4 +1,101 @@
## 1.7.2 (Upcoming)
## 1.7.3 (Upcoming)
### IMPROVEMENTS:
Major refactor: Extracted a majority of HashiCorp-maintained and community plugins from the Packer Core repository. They now live in their own multi-component plugin repositories. The following repositories have been created, and their components have been deleted from the "github.com/hashicorp/packer" repository.
* "github.com/hashicorp/packer-plugin-alicloud" [GH-10932]
* "github.com/hashicorp/packer-plugin-amazon" [GH-10800]
* "github.com/hashicorp/packer-plugin-ansible" [GH-10912]
* "github.com/hashicorp/packer-plugin-azure" [GH-10979]
* "github.com/hashicorp/packer-plugin-chef" [GH-10921]
* "github.com/hashicorp/packer-plugin-cloudstack" [GH-10934]
* "github.com/hashicorp/packer-plugin-converge" [GH-10956]
* "github.com/hashicorp/packer-plugin-digitalocean" [GH-10961]
* "github.com/hashicorp/packer-plugin-docker" [GH-10695]
* "github.com/hashicorp/packer-plugin-googlecompute" [GH-10890]
* "github.com/hashicorp/packer-plugin-hcloud" [GH-10966]
* "github.com/hashicorp/packer-plugin-hyperone" [GH-10949]
* "github.com/hashicorp/packer-plugin-hyperv" [GH-10949]
* "github.com/hashicorp/packer-plugin-inspec"
* "github.com/hashicorp/packer-plugin-ionos-cloud"
* "github.com/hashicorp/packer-plugin-jdcloud" [GH-10946]
* "github.com/hashicorp/packer-plugin-linode" [GH-10947]
* "github.com/hashicorp/packer-plugin-lxc" [GH-10965]
* "github.com/hashicorp/packer-plugin-lxd" [GH-10965]
* "github.com/hashicorp/packer-plugin-ncloud" [GH-10937]
* "github.com/hashicorp/packer-plugin-openstack" [GH-10933]
* "github.com/hashicorp/packer-plugin-oracle" [GH-10962]
* "github.com/hashicorp/packer-plugin-outscale" [GH-10941]
* "github.com/hashicorp/packer-plugin-parallels" [GH-10936]
* "github.com/hashicorp/packer-plugin-proxmox" [GH-10930]
* "github.com/hashicorp/packer-plugin-puppet" [GH-10943]
* "github.com/hashicorp/packer-plugin-qemu" [GH-10929]
* "github.com/hashicorp/packer-plugin-salt"
* "github.com/hashicorp/packer-plugin-scaleway" [GH-10939]
* "github.com/hashicorp/packer-plugin-tencentcloud" [GH-10967]
* "github.com/hashicorp/packer-plugin-triton" [GH-10963]
* "github.com/hashicorp/packer-plugin-ucloud" [GH-10953]
* "github.com/hashicorp/packer-plugin-vagrant" [GH-10960]
* "github.com/hashicorp/packer-plugin-virtualbox" [GH-10910]
* "github.com/hashicorp/packer-plugin-vmware" [GH-10920]
* "github.com/hashicorp/packer-plugin-vsphere" [GH-10896]
* "github.com/hashicorp/packer-plugin-yandex" [GH-10970]
_this will not be a backwards-breaking change in v1.7.3_ because the extracted
components are being vendored back into Packer. However, we encourage users to
begin using `packer init` to download and install plugins to get the latest
updates to each plugin, and to prepare for Packer v2.0 when we will stop
vendoring the above plugins into the main Packer binary. The following
components will not be removed from the main packer binary:
* `null` builder
* `file` builder
* `breakpoint` provisioner
* `file` provisioner
* `powershell` provisioner
* `shell` provisioner
* `shell-local` provisioner
* `sleep` provisioner
* `windows-restart` provisioner
* `windows-shell` provisioner
* `artifice` post-processor
* `checksum` post-processor
* `compress` post-processor
* `manifest` post-processor
* `shell-local` post-processor
### Bug Fixes:
* builder/azure: Add `keep_os_disk` parameter to control OS disk deletion
[GH-10045]
* builder/azure: Stop SIG timeout from being overridden by PollingDuration
[GH-10816]
* builder/azure: Support shared image gallery storage account type [GH-10863]
* builder/proxmox: Proxmox builder now uses the IPv4 address instead of always IPv6.
[GH-10858]
* core/hcl: Fix Invalid provisioner pause_before panic [GH-10978]
* core: HCL "index" function now actually returns the index of the element
[GH-11008]
* core: Implemented DEFAULT_NAME handling for datasource plugins [GH-11026]
### Enhancements:
* builder/azure: Added custom nicname and osdiskname [GH-10938]
* builder/azure: Add support for shared image gallery storage account type
[GH-10863]
* builder/digitalocean: support ecdsa, ed25519, dsa temporary key types.
[GH-10856]
* builder/ncloud: Support ncloud vpc version [GH-10870]
* post-processor/compress: Add bzip2 support to post-processor [GH-10867]
* post-processor/googlecompute-import: Add Image Storage Locations field
[GH-10864]
* Removed the golang "vendor" directory in favor of go modules. This should not
affect end users. [GH-10916]
## 1.7.2 (April 05, 2021)
### IMPROVEMENTS:

View File

@ -13,15 +13,12 @@
/builder/digitalocean/ @andrewsomething
/website/pages/docs/builders/digitalocean* @andrewsomething
/builder/hyperv/ @taliesins
/website/pages/docs/builders/hyperv* @taliesins
/examples/jdcloud/ @XiaohanLiang @remrain
/builder/jdcloud/ @XiaohanLiang @remrain
/website/pages/docs/builders/jdcloud* @XiaohanLiang @remrain
/builder/linode/ @displague @ctreatma @stvnjacobs @charliekenney23 @phillc
/website/pages/docs/builders/linode* @displague @ctreatma @stvnjacobs @charliekenney23 @phillc
/builder/linode/ @stvnjacobs @charliekenney23 @phillc
/website/pages/docs/builders/linode* @stvnjacobs @charliekenney23 @phillc
/builder/lxc/ @ChrisLundquist
/website/pages/docs/builders/lxc* @ChrisLundquist
@ -43,9 +40,6 @@
/builder/triton/ @sean-
/website/pages/docs/builders/triton* @sean-
/builder/ncloud/ @YuSungDuk
/website/pages/docs/builders/ncloud* @YuSungDuk
/builder/proxmox/ @carlpett
/website/pages/docs/builders/proxmox* @carlpett
@ -55,28 +49,13 @@
/builder/hcloud/ @LKaemmerling
/website/pages/docs/builders/hcloud* @LKaemmerling
/examples/hyperone/ @m110 @gregorybrzeski @ad-m
/builder/hyperone/ @m110 @gregorybrzeski @ad-m
/website/pages/docs/builders/hyperone* @m110 @gregorybrzeski @ad-m
/test/builder_hyperone* @m110 @gregorybrzeski @ad-m
/test/fixtures/builder-hyperone/ @m110 @gregorybrzeski @ad-m
/examples/ucloud/ @shawnmssu
/builder/ucloud/ @shawnmssu
/website/pages/docs/builders/ucloud* @shawnmssu
/builder/yandex/ @GennadySpb @alexanderKhaustov @seukyaso
/website/pages/docs/builders/yandex* @GennadySpb @alexanderKhaustov @seukyaso
/builder/osc/ @marinsalinas @Hakujou
/website/pages/docs/builders/osc* @marinsalinas @Hakujou
/examples/tencentcloud/ @likexian
/builder/tencentcloud/ @likexian
/website/pages/docs/builders/tencentcloud* @likexian
# provisioners
/examples/ansible/ @bhcleek
@ -91,4 +70,3 @@
/post-processor/yandex-export/ @GennadySpb
/post-processor/yandex-import/ @GennadySpb
/post-processor/vsphere-template/ nelson@bennu.cl
/post-processor/ucloud-import/ @shawnmssu

View File

@ -58,8 +58,7 @@ install-gen-deps: ## Install dependencies for code generation
# install` seems to install the last tagged version and we want to install
# master.
@(cd $(TEMPDIR) && GO111MODULE=on go get github.com/alvaroloes/enumer@master)
@go install ./cmd/struct-markdown
@go install ./cmd/mapstructure-to-hcl2
@go install github.com/hashicorp/packer-plugin-sdk/cmd/packer-sdc@latest
install-lint-deps: ## Install linter dependencies
# Pinning golangci-lint at v1.23.8 as --new-from-rev seems to work properly; the latest 1.24.0 has caused issues with memory consumption
@ -147,7 +146,7 @@ testacc: # install-build-deps generate ## Run acceptance tests
PACKER_ACC=1 go test -count $(COUNT) -v $(TEST) $(TESTARGS) -timeout=120m
testrace: mode-check vet ## Test with race detection enabled
@GO111MODULE=off go test -count $(COUNT) -race $(TEST) $(TESTARGS) -timeout=3m -p=8
@go test -count $(COUNT) -race $(TEST) $(TESTARGS) -timeout=3m -p=8
# Runs code coverage and open a html page with report
cover:
@ -155,13 +154,6 @@ cover:
go tool cover -html=coverage.out
rm coverage.out
check-vendor-vs-mod: ## Check that go modules and vendored code are on par
@GO111MODULE=on go mod vendor
@git diff --exit-code --ignore-space-change --ignore-space-at-eol -- vendor ; if [ $$? -eq 1 ]; then \
echo "ERROR: vendor dir is not on par with go modules definition." && \
exit 1; \
fi
vet: ## Vet Go code
@go vet $(VET) ; if [ $$? -eq 1 ]; then \
echo "ERROR: Vet found problems in the code."; \

View File

@ -4,14 +4,13 @@
[![Discuss](https://img.shields.io/badge/discuss-packer-3d89ff?style=flat)](https://discuss.hashicorp.com/c/packer)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/hashicorp/packer)](https://pkg.go.dev/github.com/hashicorp/packer)
[![GoReportCard][report-badge]][report]
[![codecov](https://codecov.io/gh/hashicorp/packer/branch/master/graph/badge.svg)](https://codecov.io/gh/hashicorp/packer)
[circleci-badge]: https://circleci.com/gh/hashicorp/packer.svg?style=svg
[circleci]: https://app.circleci.com/pipelines/github/hashicorp/packer
[appveyor-badge]: https://ci.appveyor.com/api/projects/status/miavlgnp989e5obc/branch/master?svg=true
[godoc-badge]: https://godoc.org/github.com/hashicorp/packer?status.svg
[godoc]: https://godoc.org/github.com/hashicorp/packer
[report-badge]: https://goreportcard.com/badge/github.com/hashicorp/packer
[godoc]: https://godoc.org/github.com/hashicorp/packer
[report-badge]: https://goreportcard.com/badge/github.com/hashicorp/packer
[report]: https://goreportcard.com/report/github.com/hashicorp/packer
<p align="center" style="text-align:center;">
@ -48,7 +47,7 @@ yourself](https://github.com/hashicorp/packer/blob/master/.github/CONTRIBUTING.m
After Packer is installed, create your first template, which tells Packer
what platforms to build images for and how you want to build them. In our
case, we'll create a simple AMI that has Redis pre-installed.
case, we'll create a simple AMI that has Redis pre-installed.
Save this file as `quick-start.pkr.hcl`. Export your AWS credentials as the
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.

View File

@ -0,0 +1,71 @@
// component_acc_test.go should contain acceptance tests for plugin components
// to make sure all component types can be discovered and started.
package plugin
import (
_ "embed"
"fmt"
"io/ioutil"
"os"
"os/exec"
"testing"
amazonacc "github.com/hashicorp/packer-plugin-amazon/builder/ebs/acceptance"
"github.com/hashicorp/packer-plugin-sdk/acctest"
"github.com/hashicorp/packer/hcl2template/addrs"
)
//go:embed test-fixtures/basic-amazon-ami-datasource.pkr.hcl
var basicAmazonAmiDatasourceHCL2Template string
func TestAccInitAndBuildBasicAmazonAmiDatasource(t *testing.T) {
plugin := addrs.Plugin{
Hostname: "github.com",
Namespace: "hashicorp",
Type: "amazon",
}
testCase := &acctest.PluginTestCase{
Name: "amazon-ami_basic_datasource_test",
Setup: func() error {
return cleanupPluginInstallation(plugin)
},
Teardown: func() error {
helper := amazonacc.AWSHelper{
Region: "us-west-2",
AMIName: "packer-amazon-ami-test",
}
return helper.CleanUpAmi()
},
Template: basicAmazonAmiDatasourceHCL2Template,
Type: "amazon-ami",
Init: true,
CheckInit: func(initCommand *exec.Cmd, logfile string) error {
if initCommand.ProcessState != nil {
if initCommand.ProcessState.ExitCode() != 0 {
return fmt.Errorf("Bad exit code. Logfile: %s", logfile)
}
}
logs, err := os.Open(logfile)
if err != nil {
return fmt.Errorf("Unable find %s", logfile)
}
defer logs.Close()
logsBytes, err := ioutil.ReadAll(logs)
if err != nil {
return fmt.Errorf("Unable to read %s", logfile)
}
initOutput := string(logsBytes)
return checkPluginInstallation(initOutput, plugin)
},
Check: func(buildCommand *exec.Cmd, logfile string) error {
if buildCommand.ProcessState != nil {
if buildCommand.ProcessState.ExitCode() != 0 {
return fmt.Errorf("Bad exit code. Logfile: %s", logfile)
}
}
return nil
},
}
acctest.TestPlugin(t, testCase)
}

View File

@ -0,0 +1,112 @@
// plugin_acc_test.go should contain acceptance tests for features related to
// installing, discovering and running plugins.
package plugin
import (
_ "embed"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"regexp"
"testing"
amazonacc "github.com/hashicorp/packer-plugin-amazon/builder/ebs/acceptance"
"github.com/hashicorp/packer-plugin-sdk/acctest"
"github.com/hashicorp/packer-plugin-sdk/acctest/testutils"
"github.com/hashicorp/packer/hcl2template/addrs"
"github.com/mitchellh/go-homedir"
)
//go:embed test-fixtures/basic-amazon-ebs.pkr.hcl
var basicAmazonEbsHCL2Template string
func TestAccInitAndBuildBasicAmazonEbs(t *testing.T) {
plugin := addrs.Plugin{
Hostname: "github.com",
Namespace: "hashicorp",
Type: "amazon",
}
testCase := &acctest.PluginTestCase{
Name: "amazon-ebs_basic_plugin_init_and_build_test",
Setup: func() error {
return cleanupPluginInstallation(plugin)
},
Teardown: func() error {
helper := amazonacc.AWSHelper{
Region: "us-east-1",
AMIName: "packer-plugin-amazon-ebs-test",
}
return helper.CleanUpAmi()
},
Template: basicAmazonEbsHCL2Template,
Type: "amazon-ebs",
Init: true,
CheckInit: func(initCommand *exec.Cmd, logfile string) error {
if initCommand.ProcessState != nil {
if initCommand.ProcessState.ExitCode() != 0 {
return fmt.Errorf("Bad exit code. Logfile: %s", logfile)
}
}
logs, err := os.Open(logfile)
if err != nil {
return fmt.Errorf("Unable find %s", logfile)
}
defer logs.Close()
logsBytes, err := ioutil.ReadAll(logs)
if err != nil {
return fmt.Errorf("Unable to read %s", logfile)
}
initOutput := string(logsBytes)
return checkPluginInstallation(initOutput, plugin)
},
Check: func(buildCommand *exec.Cmd, logfile string) error {
if buildCommand.ProcessState != nil {
if buildCommand.ProcessState.ExitCode() != 0 {
return fmt.Errorf("Bad exit code. Logfile: %s", logfile)
}
}
return nil
},
}
acctest.TestPlugin(t, testCase)
}
func cleanupPluginInstallation(plugin addrs.Plugin) error {
home, err := homedir.Dir()
if err != nil {
return err
}
pluginPath := filepath.Join(home,
".packer.d",
"plugins",
plugin.Hostname,
plugin.Namespace,
plugin.Type)
testutils.CleanupFiles(pluginPath)
return nil
}
func checkPluginInstallation(initOutput string, plugin addrs.Plugin) error {
expectedInitLog := "Installed plugin " + plugin.String()
if matched, _ := regexp.MatchString(expectedInitLog+".*", initOutput); !matched {
return fmt.Errorf("logs doesn't contain expected foo value %q", initOutput)
}
home, err := homedir.Dir()
if err != nil {
return err
}
pluginPath := filepath.Join(home,
".packer.d",
"plugins",
plugin.Hostname,
plugin.Namespace,
plugin.Type)
if !testutils.FileExists(pluginPath) {
return fmt.Errorf("%s plugin installation not found", plugin.String())
}
return nil
}

View File

@ -0,0 +1,33 @@
packer {
required_plugins {
amazon = {
version = ">= 0.0.1"
source = "github.com/hashicorp/amazon"
}
}
}
data "amazon-ami" "test" {
filters = {
name = "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*"
root-device-type = "ebs"
virtualization-type = "hvm"
}
most_recent = true
owners = ["099720109477"]
}
source "amazon-ebs" "basic-example" {
region = "us-west-2"
source_ami = data.amazon-ami.test.id
ami_name = "packer-amazon-ami-test"
communicator = "ssh"
instance_type = "t2.micro"
ssh_username = "ubuntu"
}
build {
sources = [
"source.amazon-ebs.basic-example"
]
}

View File

@ -0,0 +1,20 @@
packer {
required_plugins {
amazon = {
version = ">= 0.0.1"
source = "github.com/hashicorp/amazon"
}
}
}
source "amazon-ebs" "basic-test" {
region = "us-east-1"
instance_type = "m3.medium"
source_ami = "ami-76b2a71e"
ssh_username = "ubuntu"
ami_name = "packer-plugin-amazon-ebs-test"
}
build {
sources = ["source.amazon-ebs.basic-test"]
}

72
acctest/testing_test.go Normal file
View File

@ -0,0 +1,72 @@
package acctest
import (
"os"
"testing"
)
func init() {
testTesting = true
if err := os.Setenv(TestEnvVar, "1"); err != nil {
panic(err)
}
}
func TestTest_noEnv(t *testing.T) {
// Unset the variable
if err := os.Setenv(TestEnvVar, ""); err != nil {
t.Fatalf("err: %s", err)
}
defer os.Setenv(TestEnvVar, "1")
mt := new(mockT)
Test(mt, TestCase{})
if !mt.SkipCalled {
t.Fatal("skip not called")
}
}
func TestTest_preCheck(t *testing.T) {
called := false
mt := new(mockT)
Test(mt, TestCase{
PreCheck: func() { called = true },
})
if !called {
t.Fatal("precheck should be called")
}
}
// mockT implements TestT for testing
type mockT struct {
ErrorCalled bool
ErrorArgs []interface{}
FatalCalled bool
FatalArgs []interface{}
SkipCalled bool
SkipArgs []interface{}
f bool
}
func (t *mockT) Error(args ...interface{}) {
t.ErrorCalled = true
t.ErrorArgs = args
t.f = true
}
func (t *mockT) Fatal(args ...interface{}) {
t.FatalCalled = true
t.FatalArgs = args
t.f = true
}
func (t *mockT) Skip(args ...interface{}) {
t.SkipCalled = true
t.SkipArgs = args
t.f = true
}

View File

@ -1,226 +0,0 @@
//go:generate struct-markdown
package ecs
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"runtime"
"time"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/template/interpolate"
"github.com/hashicorp/packer/builder/alicloud/version"
"github.com/mitchellh/go-homedir"
)
// Config of alicloud
type AlicloudAccessConfig struct {
// Alicloud access key must be provided unless `profile` is set, but it can
// also be sourced from the `ALICLOUD_ACCESS_KEY` environment variable.
AlicloudAccessKey string `mapstructure:"access_key" required:"true"`
// Alicloud secret key must be provided unless `profile` is set, but it can
// also be sourced from the `ALICLOUD_SECRET_KEY` environment variable.
AlicloudSecretKey string `mapstructure:"secret_key" required:"true"`
// Alicloud region must be provided unless `profile` is set, but it can
// also be sourced from the `ALICLOUD_REGION` environment variable.
AlicloudRegion string `mapstructure:"region" required:"true"`
// The region validation can be skipped if this value is true, the default
// value is false.
AlicloudSkipValidation bool `mapstructure:"skip_region_validation" required:"false"`
// The image validation can be skipped if this value is true, the default
// value is false.
AlicloudSkipImageValidation bool `mapstructure:"skip_image_validation" required:"false"`
// Alicloud profile must be set unless `access_key` is set; it can also be
// sourced from the `ALICLOUD_PROFILE` environment variable.
AlicloudProfile string `mapstructure:"profile" required:"false"`
// Alicloud shared credentials file path. If this file exists, access and
// secret keys will be read from this file.
AlicloudSharedCredentialsFile string `mapstructure:"shared_credentials_file" required:"false"`
// STS access token, can be set through template or by exporting as
// environment variable such as `export SECURITY_TOKEN=value`.
SecurityToken string `mapstructure:"security_token" required:"false"`
client *ClientWrapper
}
const Packer = "HashiCorp-Packer"
const DefaultRequestReadTimeout = 10 * time.Second
// Client for AlicloudClient
func (c *AlicloudAccessConfig) Client() (*ClientWrapper, error) {
if c.client != nil {
return c.client, nil
}
if c.SecurityToken == "" {
c.SecurityToken = os.Getenv("SECURITY_TOKEN")
}
var getProviderConfig = func(str string, key string) string {
value, err := getConfigFromProfile(c, key)
if err == nil && value != nil {
str = value.(string)
}
return str
}
if c.AlicloudAccessKey == "" || c.AlicloudSecretKey == "" {
c.AlicloudAccessKey = getProviderConfig(c.AlicloudAccessKey, "access_key_id")
c.AlicloudSecretKey = getProviderConfig(c.AlicloudSecretKey, "access_key_secret")
c.AlicloudRegion = getProviderConfig(c.AlicloudRegion, "region_id")
c.SecurityToken = getProviderConfig(c.SecurityToken, "sts_token")
}
client, err := ecs.NewClientWithStsToken(c.AlicloudRegion, c.AlicloudAccessKey, c.AlicloudSecretKey, c.SecurityToken)
if err != nil {
return nil, err
}
client.AppendUserAgent(Packer, version.AlicloudPluginVersion.FormattedVersion())
client.SetReadTimeout(DefaultRequestReadTimeout)
c.client = &ClientWrapper{client}
return c.client, nil
}
func (c *AlicloudAccessConfig) Prepare(ctx *interpolate.Context) []error {
var errs []error
if err := c.Config(); err != nil {
errs = append(errs, err)
}
if c.AlicloudRegion == "" {
c.AlicloudRegion = os.Getenv("ALICLOUD_REGION")
}
if c.AlicloudRegion == "" {
errs = append(errs, fmt.Errorf("region option or ALICLOUD_REGION must be provided in template file or environment variables."))
}
if len(errs) > 0 {
return errs
}
return nil
}
func (c *AlicloudAccessConfig) Config() error {
if c.AlicloudAccessKey == "" {
c.AlicloudAccessKey = os.Getenv("ALICLOUD_ACCESS_KEY")
}
if c.AlicloudSecretKey == "" {
c.AlicloudSecretKey = os.Getenv("ALICLOUD_SECRET_KEY")
}
if c.AlicloudProfile == "" {
c.AlicloudProfile = os.Getenv("ALICLOUD_PROFILE")
}
if c.AlicloudSharedCredentialsFile == "" {
c.AlicloudSharedCredentialsFile = os.Getenv("ALICLOUD_SHARED_CREDENTIALS_FILE")
}
if (c.AlicloudAccessKey == "" || c.AlicloudSecretKey == "") && c.AlicloudProfile == "" {
return fmt.Errorf("ALICLOUD_ACCESS_KEY and ALICLOUD_SECRET_KEY must be set in template file or environment variables.")
}
return nil
}
func (c *AlicloudAccessConfig) ValidateRegion(region string) error {
supportedRegions, err := c.getSupportedRegions()
if err != nil {
return err
}
for _, supportedRegion := range supportedRegions {
if region == supportedRegion {
return nil
}
}
return fmt.Errorf("Not a valid alicloud region: %s", region)
}
func (c *AlicloudAccessConfig) getSupportedRegions() ([]string, error) {
client, err := c.Client()
if err != nil {
return nil, err
}
regionsRequest := ecs.CreateDescribeRegionsRequest()
regionsResponse, err := client.DescribeRegions(regionsRequest)
if err != nil {
return nil, err
}
validRegions := make([]string, len(regionsResponse.Regions.Region))
for _, valid := range regionsResponse.Regions.Region {
validRegions = append(validRegions, valid.RegionId)
}
return validRegions, nil
}
func getConfigFromProfile(c *AlicloudAccessConfig, ProfileKey string) (interface{}, error) {
providerConfig := make(map[string]interface{})
current := c.AlicloudProfile
if current != "" {
profilePath, err := homedir.Expand(c.AlicloudSharedCredentialsFile)
if err != nil {
return nil, err
}
if profilePath == "" {
profilePath = fmt.Sprintf("%s/.aliyun/config.json", os.Getenv("HOME"))
if runtime.GOOS == "windows" {
profilePath = fmt.Sprintf("%s/.aliyun/config.json", os.Getenv("USERPROFILE"))
}
}
_, err = os.Stat(profilePath)
if !os.IsNotExist(err) {
data, err := ioutil.ReadFile(profilePath)
if err != nil {
return nil, err
}
config := map[string]interface{}{}
err = json.Unmarshal(data, &config)
if err != nil {
return nil, err
}
for _, v := range config["profiles"].([]interface{}) {
if current == v.(map[string]interface{})["name"] {
providerConfig = v.(map[string]interface{})
}
}
}
}
mode := ""
if v, ok := providerConfig["mode"]; ok {
mode = v.(string)
} else {
// no "mode" key found for this profile; treat it as having no value
return nil, nil
}
switch ProfileKey {
case "access_key_id", "access_key_secret":
if mode == "EcsRamRole" {
return "", nil
}
case "ram_role_name":
if mode != "EcsRamRole" {
return "", nil
}
case "sts_token":
if mode != "StsToken" {
return "", nil
}
case "ram_role_arn", "ram_session_name":
if mode != "RamRoleArn" {
return "", nil
}
case "expired_seconds":
if mode != "RamRoleArn" {
return float64(0), nil
}
}
return providerConfig[ProfileKey], nil
}
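
getConfigFromProfile reads the Alibaba Cloud CLI configuration file (~/.aliyun/config.json by default, or the configured shared_credentials_file path) and picks values out of the profile named by AlicloudProfile. Below is a small standalone sketch of the file shape it expects and the lookup it performs; the mode name "AK" for plain access-key profiles and all values are assumptions, and only the keys shown are ones the function actually reads.

package main

import (
	"encoding/json"
	"fmt"
)

// Assumed shape of ~/.aliyun/config.json, limited to the keys read by
// getConfigFromProfile; every value here is a placeholder.
const sampleAliyunConfig = `{
  "profiles": [
    {
      "name": "default",
      "mode": "AK",
      "access_key_id": "my-access-key",
      "access_key_secret": "my-secret-key",
      "region_id": "cn-beijing"
    },
    {
      "name": "with-sts",
      "mode": "StsToken",
      "access_key_id": "my-access-key",
      "access_key_secret": "my-secret-key",
      "sts_token": "my-sts-token",
      "region_id": "cn-hangzhou"
    }
  ]
}`

func main() {
	// Mirror the lookup above: unmarshal into a generic map and select the
	// profile entry whose "name" matches the configured profile.
	config := map[string]interface{}{}
	if err := json.Unmarshal([]byte(sampleAliyunConfig), &config); err != nil {
		panic(err)
	}
	for _, v := range config["profiles"].([]interface{}) {
		profile := v.(map[string]interface{})
		if profile["name"] == "default" {
			fmt.Println(profile["access_key_id"]) // prints: my-access-key
		}
	}
}
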


@@ -1,52 +0,0 @@
package ecs
import (
"os"
"testing"
)
func testAlicloudAccessConfig() *AlicloudAccessConfig {
return &AlicloudAccessConfig{
AlicloudAccessKey: "ak",
AlicloudSecretKey: "acs",
}
}
func TestAlicloudAccessConfigPrepareRegion(t *testing.T) {
c := testAlicloudAccessConfig()
c.AlicloudRegion = ""
if err := c.Prepare(nil); err == nil {
t.Fatalf("should have err")
}
c.AlicloudRegion = "cn-beijing"
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
os.Setenv("ALICLOUD_REGION", "cn-hangzhou")
c.AlicloudRegion = ""
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
c.AlicloudAccessKey = ""
if err := c.Prepare(nil); err == nil {
t.Fatalf("should have err")
}
c.AlicloudProfile = "default"
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
c.AlicloudProfile = ""
os.Setenv("ALICLOUD_PROFILE", "default")
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
c.AlicloudSkipValidation = false
}


@@ -1,186 +0,0 @@
package ecs
import (
"fmt"
"log"
"sort"
"strings"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type Artifact struct {
// A map of regions to alicloud image IDs.
AlicloudImages map[string]string
// BuilderId is the unique ID for the builder that created this alicloud image
BuilderIdValue string
// Alicloud client used for performing API requests.
Client *ClientWrapper
}
func (a *Artifact) BuilderId() string {
return a.BuilderIdValue
}
func (*Artifact) Files() []string {
// We have no files
return nil
}
func (a *Artifact) Id() string {
parts := make([]string, 0, len(a.AlicloudImages))
for region, ecsImageId := range a.AlicloudImages {
parts = append(parts, fmt.Sprintf("%s:%s", region, ecsImageId))
}
sort.Strings(parts)
return strings.Join(parts, ",")
}
func (a *Artifact) String() string {
alicloudImageStrings := make([]string, 0, len(a.AlicloudImages))
for region, id := range a.AlicloudImages {
single := fmt.Sprintf("%s: %s", region, id)
alicloudImageStrings = append(alicloudImageStrings, single)
}
sort.Strings(alicloudImageStrings)
return fmt.Sprintf("Alicloud images were created:\n\n%s", strings.Join(alicloudImageStrings, "\n"))
}
func (a *Artifact) State(name string) interface{} {
switch name {
case "atlas.artifact.metadata":
return a.stateAtlasMetadata()
default:
return nil
}
}
func (a *Artifact) Destroy() error {
errors := make([]error, 0)
copyingImages := make(map[string]string, len(a.AlicloudImages))
sourceImage := make(map[string]*ecs.Image, 1)
for regionId, imageId := range a.AlicloudImages {
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = regionId
describeImagesRequest.ImageId = imageId
describeImagesRequest.Status = ImageStatusQueried
imagesResponse, err := a.Client.DescribeImages(describeImagesRequest)
if err != nil {
errors = append(errors, err)
}
images := imagesResponse.Images.Image
if len(images) == 0 {
err := fmt.Errorf("Error retrieving details for alicloud image (%s), no alicloud images found", imageId)
errors = append(errors, err)
continue
}
if images[0].IsCopied && images[0].Status != ImageStatusAvailable {
copyingImages[regionId] = imageId
} else {
sourceImage[regionId] = &images[0]
}
}
for regionId, imageId := range copyingImages {
log.Printf("Cancel copying alicloud image (%s) from region (%s)", imageId, regionId)
errs := a.unsharedAccountsOnImages(regionId, imageId)
if errs != nil {
errors = append(errors, errs...)
}
cancelImageCopyRequest := ecs.CreateCancelCopyImageRequest()
cancelImageCopyRequest.RegionId = regionId
cancelImageCopyRequest.ImageId = imageId
if _, err := a.Client.CancelCopyImage(cancelImageCopyRequest); err != nil {
errors = append(errors, err)
}
}
for regionId, image := range sourceImage {
imageId := image.ImageId
log.Printf("Delete alicloud image (%s) from region (%s)", imageId, regionId)
errs := a.unsharedAccountsOnImages(regionId, imageId)
if errs != nil {
errors = append(errors, errs...)
}
deleteImageRequest := ecs.CreateDeleteImageRequest()
deleteImageRequest.RegionId = regionId
deleteImageRequest.ImageId = imageId
if _, err := a.Client.DeleteImage(deleteImageRequest); err != nil {
errors = append(errors, err)
}
// Delete the snapshots of this image
for _, diskDevices := range image.DiskDeviceMappings.DiskDeviceMapping {
deleteSnapshotRequest := ecs.CreateDeleteSnapshotRequest()
deleteSnapshotRequest.SnapshotId = diskDevices.SnapshotId
_, err := a.Client.DeleteSnapshot(deleteSnapshotRequest)
if err != nil {
errors = append(errors, err)
}
}
}
if len(errors) > 0 {
if len(errors) == 1 {
return errors[0]
} else {
return &packersdk.MultiError{Errors: errors}
}
}
return nil
}
func (a *Artifact) unsharedAccountsOnImages(regionId string, imageId string) []error {
var errors []error
describeImageShareRequest := ecs.CreateDescribeImageSharePermissionRequest()
describeImageShareRequest.RegionId = regionId
describeImageShareRequest.ImageId = imageId
imageShareResponse, err := a.Client.DescribeImageSharePermission(describeImageShareRequest)
if err != nil {
errors = append(errors, err)
return errors
}
accountsNumber := len(imageShareResponse.Accounts.Account)
if accountsNumber > 0 {
accounts := make([]string, accountsNumber)
for index, account := range imageShareResponse.Accounts.Account {
accounts[index] = account.AliyunId
}
modifyImageShareRequest := ecs.CreateModifyImageSharePermissionRequest()
modifyImageShareRequest.RegionId = regionId
modifyImageShareRequest.ImageId = imageId
modifyImageShareRequest.RemoveAccount = &accounts
_, err := a.Client.ModifyImageSharePermission(modifyImageShareRequest)
if err != nil {
errors = append(errors, err)
}
}
return errors
}
func (a *Artifact) stateAtlasMetadata() interface{} {
metadata := make(map[string]string)
for region, imageId := range a.AlicloudImages {
k := fmt.Sprintf("region.%s", region)
metadata[k] = imageId
}
return metadata
}


@@ -1,47 +0,0 @@
package ecs
import (
"reflect"
"testing"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
func TestArtifact_Impl(t *testing.T) {
var _ packersdk.Artifact = new(Artifact)
}
func TestArtifactId(t *testing.T) {
expected := `east:foo,west:bar`
ecsImages := make(map[string]string)
ecsImages["east"] = "foo"
ecsImages["west"] = "bar"
a := &Artifact{
AlicloudImages: ecsImages,
}
result := a.Id()
if result != expected {
t.Fatalf("bad: %s", result)
}
}
func TestArtifactState_atlasMetadata(t *testing.T) {
a := &Artifact{
AlicloudImages: map[string]string{
"east": "foo",
"west": "bar",
},
}
actual := a.State("atlas.artifact.metadata")
expected := map[string]string{
"region.east": "foo",
"region.west": "bar",
}
if !reflect.DeepEqual(actual, expected) {
t.Fatalf("bad: %#v", actual)
}
}


@@ -1,270 +0,0 @@
//go:generate mapstructure-to-hcl2 -type Config,AlicloudDiskDevice
// The alicloud package contains a packersdk.Builder implementation that
// builds ECS images for Alicloud.
package ecs
import (
"context"
"fmt"
"github.com/hashicorp/hcl/v2/hcldec"
"github.com/hashicorp/packer-plugin-sdk/common"
"github.com/hashicorp/packer-plugin-sdk/communicator"
"github.com/hashicorp/packer-plugin-sdk/multistep"
"github.com/hashicorp/packer-plugin-sdk/multistep/commonsteps"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer-plugin-sdk/template/config"
"github.com/hashicorp/packer-plugin-sdk/template/interpolate"
)
// The unique ID for this builder
const BuilderId = "alibaba.alicloud"
type Config struct {
common.PackerConfig `mapstructure:",squash"`
AlicloudAccessConfig `mapstructure:",squash"`
AlicloudImageConfig `mapstructure:",squash"`
RunConfig `mapstructure:",squash"`
ctx interpolate.Context
}
type Builder struct {
config Config
runner multistep.Runner
}
type InstanceNetWork string
const (
ALICLOUD_DEFAULT_SHORT_TIMEOUT = 180
ALICLOUD_DEFAULT_TIMEOUT = 1800
ALICLOUD_DEFAULT_LONG_TIMEOUT = 3600
)
func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
err := config.Decode(&b.config, &config.DecodeOpts{
PluginType: BuilderId,
Interpolate: true,
InterpolateContext: &b.config.ctx,
InterpolateFilter: &interpolate.RenderFilter{
Exclude: []string{
"run_command",
},
},
}, raws...)
b.config.ctx.EnableEnv = true
if err != nil {
return nil, nil, err
}
if b.config.PackerConfig.PackerForce {
b.config.AlicloudImageForceDelete = true
b.config.AlicloudImageForceDeleteSnapshots = true
}
// Accumulate any errors
var errs *packersdk.MultiError
errs = packersdk.MultiErrorAppend(errs, b.config.AlicloudAccessConfig.Prepare(&b.config.ctx)...)
errs = packersdk.MultiErrorAppend(errs, b.config.AlicloudImageConfig.Prepare(&b.config.ctx)...)
errs = packersdk.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...)
if errs != nil && len(errs.Errors) > 0 {
return nil, nil, errs
}
packersdk.LogSecretFilter.Set(b.config.AlicloudAccessKey, b.config.AlicloudSecretKey)
return nil, nil, nil
}
func (b *Builder) Run(ctx context.Context, ui packersdk.Ui, hook packersdk.Hook) (packersdk.Artifact, error) {
client, err := b.config.Client()
if err != nil {
return nil, err
}
state := new(multistep.BasicStateBag)
state.Put("config", &b.config)
state.Put("client", client)
state.Put("hook", hook)
state.Put("ui", ui)
state.Put("networktype", b.chooseNetworkType())
var steps []multistep.Step
// Build the steps
steps = []multistep.Step{
&stepPreValidate{
AlicloudDestImageName: b.config.AlicloudImageName,
ForceDelete: b.config.AlicloudImageForceDelete,
},
&stepCheckAlicloudSourceImage{
SourceECSImageId: b.config.AlicloudSourceImage,
},
&stepConfigAlicloudKeyPair{
Debug: b.config.PackerDebug,
Comm: &b.config.Comm,
DebugKeyPath: fmt.Sprintf("ecs_%s.pem", b.config.PackerBuildName),
RegionId: b.config.AlicloudRegion,
},
}
if b.chooseNetworkType() == InstanceNetworkVpc {
steps = append(steps,
&stepConfigAlicloudVPC{
VpcId: b.config.VpcId,
CidrBlock: b.config.CidrBlock,
VpcName: b.config.VpcName,
},
&stepConfigAlicloudVSwitch{
VSwitchId: b.config.VSwitchId,
ZoneId: b.config.ZoneId,
CidrBlock: b.config.CidrBlock,
VSwitchName: b.config.VSwitchName,
})
}
steps = append(steps,
&stepConfigAlicloudSecurityGroup{
SecurityGroupId: b.config.SecurityGroupId,
SecurityGroupName: b.config.SecurityGroupName,
RegionId: b.config.AlicloudRegion,
},
&stepCreateAlicloudInstance{
IOOptimized: b.config.IOOptimized,
InstanceType: b.config.InstanceType,
UserData: b.config.UserData,
UserDataFile: b.config.UserDataFile,
RamRoleName: b.config.RamRoleName,
RegionId: b.config.AlicloudRegion,
InternetChargeType: b.config.InternetChargeType,
InternetMaxBandwidthOut: b.config.InternetMaxBandwidthOut,
InstanceName: b.config.InstanceName,
ZoneId: b.config.ZoneId,
})
if b.chooseNetworkType() == InstanceNetworkVpc {
steps = append(steps, &stepConfigAlicloudEIP{
AssociatePublicIpAddress: b.config.AssociatePublicIpAddress,
RegionId: b.config.AlicloudRegion,
InternetChargeType: b.config.InternetChargeType,
InternetMaxBandwidthOut: b.config.InternetMaxBandwidthOut,
SSHPrivateIp: b.config.SSHPrivateIp,
})
} else {
steps = append(steps, &stepConfigAlicloudPublicIP{
RegionId: b.config.AlicloudRegion,
SSHPrivateIp: b.config.SSHPrivateIp,
})
}
steps = append(steps,
&stepAttachKeyPair{},
&stepRunAlicloudInstance{},
&communicator.StepConnect{
Config: &b.config.RunConfig.Comm,
Host: SSHHost(
client,
b.config.SSHPrivateIp),
SSHConfig: b.config.RunConfig.Comm.SSHConfigFunc(),
},
&commonsteps.StepProvision{},
&commonsteps.StepCleanupTempKeys{
Comm: &b.config.RunConfig.Comm,
},
&stepStopAlicloudInstance{
ForceStop: b.config.ForceStopInstance,
DisableStop: b.config.DisableStopInstance,
},
&stepDeleteAlicloudImageSnapshots{
AlicloudImageForceDeleteSnapshots: b.config.AlicloudImageForceDeleteSnapshots,
AlicloudImageForceDelete: b.config.AlicloudImageForceDelete,
AlicloudImageName: b.config.AlicloudImageName,
AlicloudImageDestinationRegions: b.config.AlicloudImageConfig.AlicloudImageDestinationRegions,
AlicloudImageDestinationNames: b.config.AlicloudImageConfig.AlicloudImageDestinationNames,
})
if b.config.AlicloudImageIgnoreDataDisks {
steps = append(steps, &stepCreateAlicloudSnapshot{
WaitSnapshotReadyTimeout: b.getSnapshotReadyTimeout(),
})
}
steps = append(steps,
&stepCreateAlicloudImage{
AlicloudImageIgnoreDataDisks: b.config.AlicloudImageIgnoreDataDisks,
WaitSnapshotReadyTimeout: b.getSnapshotReadyTimeout(),
},
&stepCreateTags{
Tags: b.config.AlicloudImageTags,
},
&stepRegionCopyAlicloudImage{
AlicloudImageDestinationRegions: b.config.AlicloudImageDestinationRegions,
AlicloudImageDestinationNames: b.config.AlicloudImageDestinationNames,
RegionId: b.config.AlicloudRegion,
},
&stepShareAlicloudImage{
AlicloudImageShareAccounts: b.config.AlicloudImageShareAccounts,
AlicloudImageUNShareAccounts: b.config.AlicloudImageUNShareAccounts,
RegionId: b.config.AlicloudRegion,
})
// Run!
b.runner = commonsteps.NewRunner(steps, b.config.PackerConfig, ui)
b.runner.Run(ctx, state)
// If there was an error, return that
if rawErr, ok := state.GetOk("error"); ok {
return nil, rawErr.(error)
}
// If there are no ECS images, then just return
if _, ok := state.GetOk("alicloudimages"); !ok {
return nil, nil
}
// Build the artifact and return it
artifact := &Artifact{
AlicloudImages: state.Get("alicloudimages").(map[string]string),
BuilderIdValue: BuilderId,
Client: client,
}
return artifact, nil
}
func (b *Builder) chooseNetworkType() InstanceNetWork {
if b.isVpcNetRequired() {
return InstanceNetworkVpc
} else {
return InstanceNetworkClassic
}
}
func (b *Builder) isVpcNetRequired() bool {
// UserData and KeyPair only work in a VPC network
return b.isVpcSpecified() || b.isUserDataNeeded() || b.isKeyPairNeeded()
}
func (b *Builder) isVpcSpecified() bool {
return b.config.VpcId != "" || b.config.VSwitchId != ""
}
func (b *Builder) isUserDataNeeded() bool {
// Public key setup requires userdata
if b.config.RunConfig.Comm.SSHPrivateKeyFile != "" {
return true
}
return b.config.UserData != "" || b.config.UserDataFile != ""
}
func (b *Builder) isKeyPairNeeded() bool {
return b.config.Comm.SSHKeyPairName != "" || b.config.Comm.SSHTemporaryKeyPairName != ""
}
func (b *Builder) getSnapshotReadyTimeout() int {
if b.config.WaitSnapshotReadyTimeout > 0 {
return b.config.WaitSnapshotReadyTimeout
}
return ALICLOUD_DEFAULT_LONG_TIMEOUT
}
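
A Builder implementing packersdk.Builder, like the one above, is typically exposed to Packer through the packer-plugin-sdk plugin server from a small main package. The sketch below assumes the plugin.Set API from packer-plugin-sdk and an illustrative import path for this ecs package; it is not part of the original file.

package main

import (
	"fmt"
	"os"

	"github.com/hashicorp/packer-plugin-sdk/plugin"

	// Illustrative import path for the package that defines Builder above.
	"github.com/hashicorp/packer-plugin-alicloud/builder/ecs"
)

func main() {
	pps := plugin.NewSet()
	// Register the ECS builder under the default component name.
	pps.RegisterBuilder(plugin.DEFAULT_NAME, new(ecs.Builder))
	if err := pps.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err.Error())
		os.Exit(1)
	}
}
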


@@ -1,275 +0,0 @@
// Code generated by "mapstructure-to-hcl2 -type Config,AlicloudDiskDevice"; DO NOT EDIT.
package ecs
import (
"github.com/hashicorp/hcl/v2/hcldec"
"github.com/hashicorp/packer-plugin-sdk/template/config"
"github.com/zclconf/go-cty/cty"
)
// FlatAlicloudDiskDevice is an auto-generated flat version of AlicloudDiskDevice.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
type FlatAlicloudDiskDevice struct {
DiskName *string `mapstructure:"disk_name" required:"false" cty:"disk_name" hcl:"disk_name"`
DiskCategory *string `mapstructure:"disk_category" required:"false" cty:"disk_category" hcl:"disk_category"`
DiskSize *int `mapstructure:"disk_size" required:"false" cty:"disk_size" hcl:"disk_size"`
SnapshotId *string `mapstructure:"disk_snapshot_id" required:"false" cty:"disk_snapshot_id" hcl:"disk_snapshot_id"`
Description *string `mapstructure:"disk_description" required:"false" cty:"disk_description" hcl:"disk_description"`
DeleteWithInstance *bool `mapstructure:"disk_delete_with_instance" required:"false" cty:"disk_delete_with_instance" hcl:"disk_delete_with_instance"`
Device *string `mapstructure:"disk_device" required:"false" cty:"disk_device" hcl:"disk_device"`
Encrypted *bool `mapstructure:"disk_encrypted" required:"false" cty:"disk_encrypted" hcl:"disk_encrypted"`
}
// FlatMapstructure returns a new FlatAlicloudDiskDevice.
// FlatAlicloudDiskDevice is an auto-generated flat version of AlicloudDiskDevice.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
func (*AlicloudDiskDevice) FlatMapstructure() interface{ HCL2Spec() map[string]hcldec.Spec } {
return new(FlatAlicloudDiskDevice)
}
// HCL2Spec returns the hcl spec of a AlicloudDiskDevice.
// This spec is used by HCL to read the fields of AlicloudDiskDevice.
// The decoded values from this spec will then be applied to a FlatAlicloudDiskDevice.
func (*FlatAlicloudDiskDevice) HCL2Spec() map[string]hcldec.Spec {
s := map[string]hcldec.Spec{
"disk_name": &hcldec.AttrSpec{Name: "disk_name", Type: cty.String, Required: false},
"disk_category": &hcldec.AttrSpec{Name: "disk_category", Type: cty.String, Required: false},
"disk_size": &hcldec.AttrSpec{Name: "disk_size", Type: cty.Number, Required: false},
"disk_snapshot_id": &hcldec.AttrSpec{Name: "disk_snapshot_id", Type: cty.String, Required: false},
"disk_description": &hcldec.AttrSpec{Name: "disk_description", Type: cty.String, Required: false},
"disk_delete_with_instance": &hcldec.AttrSpec{Name: "disk_delete_with_instance", Type: cty.Bool, Required: false},
"disk_device": &hcldec.AttrSpec{Name: "disk_device", Type: cty.String, Required: false},
"disk_encrypted": &hcldec.AttrSpec{Name: "disk_encrypted", Type: cty.Bool, Required: false},
}
return s
}
// FlatConfig is an auto-generated flat version of Config.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
type FlatConfig struct {
PackerBuildName *string `mapstructure:"packer_build_name" cty:"packer_build_name" hcl:"packer_build_name"`
PackerBuilderType *string `mapstructure:"packer_builder_type" cty:"packer_builder_type" hcl:"packer_builder_type"`
PackerCoreVersion *string `mapstructure:"packer_core_version" cty:"packer_core_version" hcl:"packer_core_version"`
PackerDebug *bool `mapstructure:"packer_debug" cty:"packer_debug" hcl:"packer_debug"`
PackerForce *bool `mapstructure:"packer_force" cty:"packer_force" hcl:"packer_force"`
PackerOnError *string `mapstructure:"packer_on_error" cty:"packer_on_error" hcl:"packer_on_error"`
PackerUserVars map[string]string `mapstructure:"packer_user_variables" cty:"packer_user_variables" hcl:"packer_user_variables"`
PackerSensitiveVars []string `mapstructure:"packer_sensitive_variables" cty:"packer_sensitive_variables" hcl:"packer_sensitive_variables"`
AlicloudAccessKey *string `mapstructure:"access_key" required:"true" cty:"access_key" hcl:"access_key"`
AlicloudSecretKey *string `mapstructure:"secret_key" required:"true" cty:"secret_key" hcl:"secret_key"`
AlicloudRegion *string `mapstructure:"region" required:"true" cty:"region" hcl:"region"`
AlicloudSkipValidation *bool `mapstructure:"skip_region_validation" required:"false" cty:"skip_region_validation" hcl:"skip_region_validation"`
AlicloudSkipImageValidation *bool `mapstructure:"skip_image_validation" required:"false" cty:"skip_image_validation" hcl:"skip_image_validation"`
AlicloudProfile *string `mapstructure:"profile" required:"false" cty:"profile" hcl:"profile"`
AlicloudSharedCredentialsFile *string `mapstructure:"shared_credentials_file" required:"false" cty:"shared_credentials_file" hcl:"shared_credentials_file"`
SecurityToken *string `mapstructure:"security_token" required:"false" cty:"security_token" hcl:"security_token"`
AlicloudImageName *string `mapstructure:"image_name" required:"true" cty:"image_name" hcl:"image_name"`
AlicloudImageVersion *string `mapstructure:"image_version" required:"false" cty:"image_version" hcl:"image_version"`
AlicloudImageDescription *string `mapstructure:"image_description" required:"false" cty:"image_description" hcl:"image_description"`
AlicloudImageShareAccounts []string `mapstructure:"image_share_account" required:"false" cty:"image_share_account" hcl:"image_share_account"`
AlicloudImageUNShareAccounts []string `mapstructure:"image_unshare_account" cty:"image_unshare_account" hcl:"image_unshare_account"`
AlicloudImageDestinationRegions []string `mapstructure:"image_copy_regions" required:"false" cty:"image_copy_regions" hcl:"image_copy_regions"`
AlicloudImageDestinationNames []string `mapstructure:"image_copy_names" required:"false" cty:"image_copy_names" hcl:"image_copy_names"`
ImageEncrypted *bool `mapstructure:"image_encrypted" required:"false" cty:"image_encrypted" hcl:"image_encrypted"`
AlicloudImageForceDelete *bool `mapstructure:"image_force_delete" required:"false" cty:"image_force_delete" hcl:"image_force_delete"`
AlicloudImageForceDeleteSnapshots *bool `mapstructure:"image_force_delete_snapshots" required:"false" cty:"image_force_delete_snapshots" hcl:"image_force_delete_snapshots"`
AlicloudImageForceDeleteInstances *bool `mapstructure:"image_force_delete_instances" cty:"image_force_delete_instances" hcl:"image_force_delete_instances"`
AlicloudImageIgnoreDataDisks *bool `mapstructure:"image_ignore_data_disks" required:"false" cty:"image_ignore_data_disks" hcl:"image_ignore_data_disks"`
AlicloudImageTags map[string]string `mapstructure:"tags" required:"false" cty:"tags" hcl:"tags"`
AlicloudImageTag []config.FlatKeyValue `mapstructure:"tag" required:"false" cty:"tag" hcl:"tag"`
ECSSystemDiskMapping *FlatAlicloudDiskDevice `mapstructure:"system_disk_mapping" required:"false" cty:"system_disk_mapping" hcl:"system_disk_mapping"`
ECSImagesDiskMappings []FlatAlicloudDiskDevice `mapstructure:"image_disk_mappings" required:"false" cty:"image_disk_mappings" hcl:"image_disk_mappings"`
AssociatePublicIpAddress *bool `mapstructure:"associate_public_ip_address" cty:"associate_public_ip_address" hcl:"associate_public_ip_address"`
ZoneId *string `mapstructure:"zone_id" required:"false" cty:"zone_id" hcl:"zone_id"`
IOOptimized *bool `mapstructure:"io_optimized" required:"false" cty:"io_optimized" hcl:"io_optimized"`
InstanceType *string `mapstructure:"instance_type" required:"true" cty:"instance_type" hcl:"instance_type"`
Description *string `mapstructure:"description" cty:"description" hcl:"description"`
AlicloudSourceImage *string `mapstructure:"source_image" required:"true" cty:"source_image" hcl:"source_image"`
ForceStopInstance *bool `mapstructure:"force_stop_instance" required:"false" cty:"force_stop_instance" hcl:"force_stop_instance"`
DisableStopInstance *bool `mapstructure:"disable_stop_instance" required:"false" cty:"disable_stop_instance" hcl:"disable_stop_instance"`
RamRoleName *string `mapstructure:"ram_role_name" required:"false" cty:"ram_role_name" hcl:"ram_role_name"`
SecurityGroupId *string `mapstructure:"security_group_id" required:"false" cty:"security_group_id" hcl:"security_group_id"`
SecurityGroupName *string `mapstructure:"security_group_name" required:"false" cty:"security_group_name" hcl:"security_group_name"`
UserData *string `mapstructure:"user_data" required:"false" cty:"user_data" hcl:"user_data"`
UserDataFile *string `mapstructure:"user_data_file" required:"false" cty:"user_data_file" hcl:"user_data_file"`
VpcId *string `mapstructure:"vpc_id" required:"false" cty:"vpc_id" hcl:"vpc_id"`
VpcName *string `mapstructure:"vpc_name" required:"false" cty:"vpc_name" hcl:"vpc_name"`
CidrBlock *string `mapstructure:"vpc_cidr_block" required:"false" cty:"vpc_cidr_block" hcl:"vpc_cidr_block"`
VSwitchId *string `mapstructure:"vswitch_id" required:"false" cty:"vswitch_id" hcl:"vswitch_id"`
VSwitchName *string `mapstructure:"vswitch_name" required:"false" cty:"vswitch_name" hcl:"vswitch_name"`
InstanceName *string `mapstructure:"instance_name" required:"false" cty:"instance_name" hcl:"instance_name"`
InternetChargeType *string `mapstructure:"internet_charge_type" required:"false" cty:"internet_charge_type" hcl:"internet_charge_type"`
InternetMaxBandwidthOut *int `mapstructure:"internet_max_bandwidth_out" required:"false" cty:"internet_max_bandwidth_out" hcl:"internet_max_bandwidth_out"`
WaitSnapshotReadyTimeout *int `mapstructure:"wait_snapshot_ready_timeout" required:"false" cty:"wait_snapshot_ready_timeout" hcl:"wait_snapshot_ready_timeout"`
Type *string `mapstructure:"communicator" cty:"communicator" hcl:"communicator"`
PauseBeforeConnect *string `mapstructure:"pause_before_connecting" cty:"pause_before_connecting" hcl:"pause_before_connecting"`
SSHHost *string `mapstructure:"ssh_host" cty:"ssh_host" hcl:"ssh_host"`
SSHPort *int `mapstructure:"ssh_port" cty:"ssh_port" hcl:"ssh_port"`
SSHUsername *string `mapstructure:"ssh_username" cty:"ssh_username" hcl:"ssh_username"`
SSHPassword *string `mapstructure:"ssh_password" cty:"ssh_password" hcl:"ssh_password"`
SSHKeyPairName *string `mapstructure:"ssh_keypair_name" undocumented:"true" cty:"ssh_keypair_name" hcl:"ssh_keypair_name"`
SSHTemporaryKeyPairName *string `mapstructure:"temporary_key_pair_name" undocumented:"true" cty:"temporary_key_pair_name" hcl:"temporary_key_pair_name"`
SSHTemporaryKeyPairType *string `mapstructure:"temporary_key_pair_type" cty:"temporary_key_pair_type" hcl:"temporary_key_pair_type"`
SSHTemporaryKeyPairBits *int `mapstructure:"temporary_key_pair_bits" cty:"temporary_key_pair_bits" hcl:"temporary_key_pair_bits"`
SSHCiphers []string `mapstructure:"ssh_ciphers" cty:"ssh_ciphers" hcl:"ssh_ciphers"`
SSHClearAuthorizedKeys *bool `mapstructure:"ssh_clear_authorized_keys" cty:"ssh_clear_authorized_keys" hcl:"ssh_clear_authorized_keys"`
SSHKEXAlgos []string `mapstructure:"ssh_key_exchange_algorithms" cty:"ssh_key_exchange_algorithms" hcl:"ssh_key_exchange_algorithms"`
SSHPrivateKeyFile *string `mapstructure:"ssh_private_key_file" undocumented:"true" cty:"ssh_private_key_file" hcl:"ssh_private_key_file"`
SSHCertificateFile *string `mapstructure:"ssh_certificate_file" cty:"ssh_certificate_file" hcl:"ssh_certificate_file"`
SSHPty *bool `mapstructure:"ssh_pty" cty:"ssh_pty" hcl:"ssh_pty"`
SSHTimeout *string `mapstructure:"ssh_timeout" cty:"ssh_timeout" hcl:"ssh_timeout"`
SSHWaitTimeout *string `mapstructure:"ssh_wait_timeout" undocumented:"true" cty:"ssh_wait_timeout" hcl:"ssh_wait_timeout"`
SSHAgentAuth *bool `mapstructure:"ssh_agent_auth" undocumented:"true" cty:"ssh_agent_auth" hcl:"ssh_agent_auth"`
SSHDisableAgentForwarding *bool `mapstructure:"ssh_disable_agent_forwarding" cty:"ssh_disable_agent_forwarding" hcl:"ssh_disable_agent_forwarding"`
SSHHandshakeAttempts *int `mapstructure:"ssh_handshake_attempts" cty:"ssh_handshake_attempts" hcl:"ssh_handshake_attempts"`
SSHBastionHost *string `mapstructure:"ssh_bastion_host" cty:"ssh_bastion_host" hcl:"ssh_bastion_host"`
SSHBastionPort *int `mapstructure:"ssh_bastion_port" cty:"ssh_bastion_port" hcl:"ssh_bastion_port"`
SSHBastionAgentAuth *bool `mapstructure:"ssh_bastion_agent_auth" cty:"ssh_bastion_agent_auth" hcl:"ssh_bastion_agent_auth"`
SSHBastionUsername *string `mapstructure:"ssh_bastion_username" cty:"ssh_bastion_username" hcl:"ssh_bastion_username"`
SSHBastionPassword *string `mapstructure:"ssh_bastion_password" cty:"ssh_bastion_password" hcl:"ssh_bastion_password"`
SSHBastionInteractive *bool `mapstructure:"ssh_bastion_interactive" cty:"ssh_bastion_interactive" hcl:"ssh_bastion_interactive"`
SSHBastionPrivateKeyFile *string `mapstructure:"ssh_bastion_private_key_file" cty:"ssh_bastion_private_key_file" hcl:"ssh_bastion_private_key_file"`
SSHBastionCertificateFile *string `mapstructure:"ssh_bastion_certificate_file" cty:"ssh_bastion_certificate_file" hcl:"ssh_bastion_certificate_file"`
SSHFileTransferMethod *string `mapstructure:"ssh_file_transfer_method" cty:"ssh_file_transfer_method" hcl:"ssh_file_transfer_method"`
SSHProxyHost *string `mapstructure:"ssh_proxy_host" cty:"ssh_proxy_host" hcl:"ssh_proxy_host"`
SSHProxyPort *int `mapstructure:"ssh_proxy_port" cty:"ssh_proxy_port" hcl:"ssh_proxy_port"`
SSHProxyUsername *string `mapstructure:"ssh_proxy_username" cty:"ssh_proxy_username" hcl:"ssh_proxy_username"`
SSHProxyPassword *string `mapstructure:"ssh_proxy_password" cty:"ssh_proxy_password" hcl:"ssh_proxy_password"`
SSHKeepAliveInterval *string `mapstructure:"ssh_keep_alive_interval" cty:"ssh_keep_alive_interval" hcl:"ssh_keep_alive_interval"`
SSHReadWriteTimeout *string `mapstructure:"ssh_read_write_timeout" cty:"ssh_read_write_timeout" hcl:"ssh_read_write_timeout"`
SSHRemoteTunnels []string `mapstructure:"ssh_remote_tunnels" cty:"ssh_remote_tunnels" hcl:"ssh_remote_tunnels"`
SSHLocalTunnels []string `mapstructure:"ssh_local_tunnels" cty:"ssh_local_tunnels" hcl:"ssh_local_tunnels"`
SSHPublicKey []byte `mapstructure:"ssh_public_key" undocumented:"true" cty:"ssh_public_key" hcl:"ssh_public_key"`
SSHPrivateKey []byte `mapstructure:"ssh_private_key" undocumented:"true" cty:"ssh_private_key" hcl:"ssh_private_key"`
WinRMUser *string `mapstructure:"winrm_username" cty:"winrm_username" hcl:"winrm_username"`
WinRMPassword *string `mapstructure:"winrm_password" cty:"winrm_password" hcl:"winrm_password"`
WinRMHost *string `mapstructure:"winrm_host" cty:"winrm_host" hcl:"winrm_host"`
WinRMNoProxy *bool `mapstructure:"winrm_no_proxy" cty:"winrm_no_proxy" hcl:"winrm_no_proxy"`
WinRMPort *int `mapstructure:"winrm_port" cty:"winrm_port" hcl:"winrm_port"`
WinRMTimeout *string `mapstructure:"winrm_timeout" cty:"winrm_timeout" hcl:"winrm_timeout"`
WinRMUseSSL *bool `mapstructure:"winrm_use_ssl" cty:"winrm_use_ssl" hcl:"winrm_use_ssl"`
WinRMInsecure *bool `mapstructure:"winrm_insecure" cty:"winrm_insecure" hcl:"winrm_insecure"`
WinRMUseNTLM *bool `mapstructure:"winrm_use_ntlm" cty:"winrm_use_ntlm" hcl:"winrm_use_ntlm"`
SSHPrivateIp *bool `mapstructure:"ssh_private_ip" required:"false" cty:"ssh_private_ip" hcl:"ssh_private_ip"`
}
// FlatMapstructure returns a new FlatConfig.
// FlatConfig is an auto-generated flat version of Config.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
func (*Config) FlatMapstructure() interface{ HCL2Spec() map[string]hcldec.Spec } {
return new(FlatConfig)
}
// HCL2Spec returns the hcl spec of a Config.
// This spec is used by HCL to read the fields of Config.
// The decoded values from this spec will then be applied to a FlatConfig.
func (*FlatConfig) HCL2Spec() map[string]hcldec.Spec {
s := map[string]hcldec.Spec{
"packer_build_name": &hcldec.AttrSpec{Name: "packer_build_name", Type: cty.String, Required: false},
"packer_builder_type": &hcldec.AttrSpec{Name: "packer_builder_type", Type: cty.String, Required: false},
"packer_core_version": &hcldec.AttrSpec{Name: "packer_core_version", Type: cty.String, Required: false},
"packer_debug": &hcldec.AttrSpec{Name: "packer_debug", Type: cty.Bool, Required: false},
"packer_force": &hcldec.AttrSpec{Name: "packer_force", Type: cty.Bool, Required: false},
"packer_on_error": &hcldec.AttrSpec{Name: "packer_on_error", Type: cty.String, Required: false},
"packer_user_variables": &hcldec.AttrSpec{Name: "packer_user_variables", Type: cty.Map(cty.String), Required: false},
"packer_sensitive_variables": &hcldec.AttrSpec{Name: "packer_sensitive_variables", Type: cty.List(cty.String), Required: false},
"access_key": &hcldec.AttrSpec{Name: "access_key", Type: cty.String, Required: false},
"secret_key": &hcldec.AttrSpec{Name: "secret_key", Type: cty.String, Required: false},
"region": &hcldec.AttrSpec{Name: "region", Type: cty.String, Required: false},
"skip_region_validation": &hcldec.AttrSpec{Name: "skip_region_validation", Type: cty.Bool, Required: false},
"skip_image_validation": &hcldec.AttrSpec{Name: "skip_image_validation", Type: cty.Bool, Required: false},
"profile": &hcldec.AttrSpec{Name: "profile", Type: cty.String, Required: false},
"shared_credentials_file": &hcldec.AttrSpec{Name: "shared_credentials_file", Type: cty.String, Required: false},
"security_token": &hcldec.AttrSpec{Name: "security_token", Type: cty.String, Required: false},
"image_name": &hcldec.AttrSpec{Name: "image_name", Type: cty.String, Required: false},
"image_version": &hcldec.AttrSpec{Name: "image_version", Type: cty.String, Required: false},
"image_description": &hcldec.AttrSpec{Name: "image_description", Type: cty.String, Required: false},
"image_share_account": &hcldec.AttrSpec{Name: "image_share_account", Type: cty.List(cty.String), Required: false},
"image_unshare_account": &hcldec.AttrSpec{Name: "image_unshare_account", Type: cty.List(cty.String), Required: false},
"image_copy_regions": &hcldec.AttrSpec{Name: "image_copy_regions", Type: cty.List(cty.String), Required: false},
"image_copy_names": &hcldec.AttrSpec{Name: "image_copy_names", Type: cty.List(cty.String), Required: false},
"image_encrypted": &hcldec.AttrSpec{Name: "image_encrypted", Type: cty.Bool, Required: false},
"image_force_delete": &hcldec.AttrSpec{Name: "image_force_delete", Type: cty.Bool, Required: false},
"image_force_delete_snapshots": &hcldec.AttrSpec{Name: "image_force_delete_snapshots", Type: cty.Bool, Required: false},
"image_force_delete_instances": &hcldec.AttrSpec{Name: "image_force_delete_instances", Type: cty.Bool, Required: false},
"image_ignore_data_disks": &hcldec.AttrSpec{Name: "image_ignore_data_disks", Type: cty.Bool, Required: false},
"tags": &hcldec.AttrSpec{Name: "tags", Type: cty.Map(cty.String), Required: false},
"tag": &hcldec.BlockListSpec{TypeName: "tag", Nested: hcldec.ObjectSpec((*config.FlatKeyValue)(nil).HCL2Spec())},
"system_disk_mapping": &hcldec.BlockSpec{TypeName: "system_disk_mapping", Nested: hcldec.ObjectSpec((*FlatAlicloudDiskDevice)(nil).HCL2Spec())},
"image_disk_mappings": &hcldec.BlockListSpec{TypeName: "image_disk_mappings", Nested: hcldec.ObjectSpec((*FlatAlicloudDiskDevice)(nil).HCL2Spec())},
"associate_public_ip_address": &hcldec.AttrSpec{Name: "associate_public_ip_address", Type: cty.Bool, Required: false},
"zone_id": &hcldec.AttrSpec{Name: "zone_id", Type: cty.String, Required: false},
"io_optimized": &hcldec.AttrSpec{Name: "io_optimized", Type: cty.Bool, Required: false},
"instance_type": &hcldec.AttrSpec{Name: "instance_type", Type: cty.String, Required: false},
"description": &hcldec.AttrSpec{Name: "description", Type: cty.String, Required: false},
"source_image": &hcldec.AttrSpec{Name: "source_image", Type: cty.String, Required: false},
"force_stop_instance": &hcldec.AttrSpec{Name: "force_stop_instance", Type: cty.Bool, Required: false},
"disable_stop_instance": &hcldec.AttrSpec{Name: "disable_stop_instance", Type: cty.Bool, Required: false},
"ram_role_name": &hcldec.AttrSpec{Name: "ram_role_name", Type: cty.String, Required: false},
"security_group_id": &hcldec.AttrSpec{Name: "security_group_id", Type: cty.String, Required: false},
"security_group_name": &hcldec.AttrSpec{Name: "security_group_name", Type: cty.String, Required: false},
"user_data": &hcldec.AttrSpec{Name: "user_data", Type: cty.String, Required: false},
"user_data_file": &hcldec.AttrSpec{Name: "user_data_file", Type: cty.String, Required: false},
"vpc_id": &hcldec.AttrSpec{Name: "vpc_id", Type: cty.String, Required: false},
"vpc_name": &hcldec.AttrSpec{Name: "vpc_name", Type: cty.String, Required: false},
"vpc_cidr_block": &hcldec.AttrSpec{Name: "vpc_cidr_block", Type: cty.String, Required: false},
"vswitch_id": &hcldec.AttrSpec{Name: "vswitch_id", Type: cty.String, Required: false},
"vswitch_name": &hcldec.AttrSpec{Name: "vswitch_name", Type: cty.String, Required: false},
"instance_name": &hcldec.AttrSpec{Name: "instance_name", Type: cty.String, Required: false},
"internet_charge_type": &hcldec.AttrSpec{Name: "internet_charge_type", Type: cty.String, Required: false},
"internet_max_bandwidth_out": &hcldec.AttrSpec{Name: "internet_max_bandwidth_out", Type: cty.Number, Required: false},
"wait_snapshot_ready_timeout": &hcldec.AttrSpec{Name: "wait_snapshot_ready_timeout", Type: cty.Number, Required: false},
"communicator": &hcldec.AttrSpec{Name: "communicator", Type: cty.String, Required: false},
"pause_before_connecting": &hcldec.AttrSpec{Name: "pause_before_connecting", Type: cty.String, Required: false},
"ssh_host": &hcldec.AttrSpec{Name: "ssh_host", Type: cty.String, Required: false},
"ssh_port": &hcldec.AttrSpec{Name: "ssh_port", Type: cty.Number, Required: false},
"ssh_username": &hcldec.AttrSpec{Name: "ssh_username", Type: cty.String, Required: false},
"ssh_password": &hcldec.AttrSpec{Name: "ssh_password", Type: cty.String, Required: false},
"ssh_keypair_name": &hcldec.AttrSpec{Name: "ssh_keypair_name", Type: cty.String, Required: false},
"temporary_key_pair_name": &hcldec.AttrSpec{Name: "temporary_key_pair_name", Type: cty.String, Required: false},
"temporary_key_pair_type": &hcldec.AttrSpec{Name: "temporary_key_pair_type", Type: cty.String, Required: false},
"temporary_key_pair_bits": &hcldec.AttrSpec{Name: "temporary_key_pair_bits", Type: cty.Number, Required: false},
"ssh_ciphers": &hcldec.AttrSpec{Name: "ssh_ciphers", Type: cty.List(cty.String), Required: false},
"ssh_clear_authorized_keys": &hcldec.AttrSpec{Name: "ssh_clear_authorized_keys", Type: cty.Bool, Required: false},
"ssh_key_exchange_algorithms": &hcldec.AttrSpec{Name: "ssh_key_exchange_algorithms", Type: cty.List(cty.String), Required: false},
"ssh_private_key_file": &hcldec.AttrSpec{Name: "ssh_private_key_file", Type: cty.String, Required: false},
"ssh_certificate_file": &hcldec.AttrSpec{Name: "ssh_certificate_file", Type: cty.String, Required: false},
"ssh_pty": &hcldec.AttrSpec{Name: "ssh_pty", Type: cty.Bool, Required: false},
"ssh_timeout": &hcldec.AttrSpec{Name: "ssh_timeout", Type: cty.String, Required: false},
"ssh_wait_timeout": &hcldec.AttrSpec{Name: "ssh_wait_timeout", Type: cty.String, Required: false},
"ssh_agent_auth": &hcldec.AttrSpec{Name: "ssh_agent_auth", Type: cty.Bool, Required: false},
"ssh_disable_agent_forwarding": &hcldec.AttrSpec{Name: "ssh_disable_agent_forwarding", Type: cty.Bool, Required: false},
"ssh_handshake_attempts": &hcldec.AttrSpec{Name: "ssh_handshake_attempts", Type: cty.Number, Required: false},
"ssh_bastion_host": &hcldec.AttrSpec{Name: "ssh_bastion_host", Type: cty.String, Required: false},
"ssh_bastion_port": &hcldec.AttrSpec{Name: "ssh_bastion_port", Type: cty.Number, Required: false},
"ssh_bastion_agent_auth": &hcldec.AttrSpec{Name: "ssh_bastion_agent_auth", Type: cty.Bool, Required: false},
"ssh_bastion_username": &hcldec.AttrSpec{Name: "ssh_bastion_username", Type: cty.String, Required: false},
"ssh_bastion_password": &hcldec.AttrSpec{Name: "ssh_bastion_password", Type: cty.String, Required: false},
"ssh_bastion_interactive": &hcldec.AttrSpec{Name: "ssh_bastion_interactive", Type: cty.Bool, Required: false},
"ssh_bastion_private_key_file": &hcldec.AttrSpec{Name: "ssh_bastion_private_key_file", Type: cty.String, Required: false},
"ssh_bastion_certificate_file": &hcldec.AttrSpec{Name: "ssh_bastion_certificate_file", Type: cty.String, Required: false},
"ssh_file_transfer_method": &hcldec.AttrSpec{Name: "ssh_file_transfer_method", Type: cty.String, Required: false},
"ssh_proxy_host": &hcldec.AttrSpec{Name: "ssh_proxy_host", Type: cty.String, Required: false},
"ssh_proxy_port": &hcldec.AttrSpec{Name: "ssh_proxy_port", Type: cty.Number, Required: false},
"ssh_proxy_username": &hcldec.AttrSpec{Name: "ssh_proxy_username", Type: cty.String, Required: false},
"ssh_proxy_password": &hcldec.AttrSpec{Name: "ssh_proxy_password", Type: cty.String, Required: false},
"ssh_keep_alive_interval": &hcldec.AttrSpec{Name: "ssh_keep_alive_interval", Type: cty.String, Required: false},
"ssh_read_write_timeout": &hcldec.AttrSpec{Name: "ssh_read_write_timeout", Type: cty.String, Required: false},
"ssh_remote_tunnels": &hcldec.AttrSpec{Name: "ssh_remote_tunnels", Type: cty.List(cty.String), Required: false},
"ssh_local_tunnels": &hcldec.AttrSpec{Name: "ssh_local_tunnels", Type: cty.List(cty.String), Required: false},
"ssh_public_key": &hcldec.AttrSpec{Name: "ssh_public_key", Type: cty.List(cty.Number), Required: false},
"ssh_private_key": &hcldec.AttrSpec{Name: "ssh_private_key", Type: cty.List(cty.Number), Required: false},
"winrm_username": &hcldec.AttrSpec{Name: "winrm_username", Type: cty.String, Required: false},
"winrm_password": &hcldec.AttrSpec{Name: "winrm_password", Type: cty.String, Required: false},
"winrm_host": &hcldec.AttrSpec{Name: "winrm_host", Type: cty.String, Required: false},
"winrm_no_proxy": &hcldec.AttrSpec{Name: "winrm_no_proxy", Type: cty.Bool, Required: false},
"winrm_port": &hcldec.AttrSpec{Name: "winrm_port", Type: cty.Number, Required: false},
"winrm_timeout": &hcldec.AttrSpec{Name: "winrm_timeout", Type: cty.String, Required: false},
"winrm_use_ssl": &hcldec.AttrSpec{Name: "winrm_use_ssl", Type: cty.Bool, Required: false},
"winrm_insecure": &hcldec.AttrSpec{Name: "winrm_insecure", Type: cty.Bool, Required: false},
"winrm_use_ntlm": &hcldec.AttrSpec{Name: "winrm_use_ntlm", Type: cty.Bool, Required: false},
"ssh_private_ip": &hcldec.AttrSpec{Name: "ssh_private_ip", Type: cty.Bool, Required: false},
}
return s
}


@@ -1,891 +0,0 @@
package ecs
import (
"encoding/json"
"fmt"
"os"
"strings"
"testing"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
builderT "github.com/hashicorp/packer-plugin-sdk/acctest"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
const defaultTestRegion = "cn-beijing"
func TestBuilderAcc_validateRegion(t *testing.T) {
t.Parallel()
if os.Getenv(builderT.TestEnvVar) == "" {
t.Skip(fmt.Sprintf("Acceptance tests skipped unless env '%s' set", builderT.TestEnvVar))
return
}
testAccPreCheck(t)
access := &AlicloudAccessConfig{AlicloudRegion: "cn-beijing"}
err := access.Config()
if err != nil {
t.Fatalf("init AlicloudAccessConfig failed: %s", err)
}
err = access.ValidateRegion("cn-hangzhou")
if err != nil {
t.Fatalf("Expect pass with valid region id but failed: %s", err)
}
err = access.ValidateRegion("invalidRegionId")
if err == nil {
t.Fatal("Expect failure due to invalid region id but passed")
}
}
func TestBuilderAcc_basic(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccBasic,
})
}
const testBuilderAccBasic = `
{ "builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test-basic_{{timestamp}}"
}]
}`
func TestBuilderAcc_withDiskSettings(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccWithDiskSettings,
Check: checkImageDisksSettings(),
})
}
const testBuilderAccWithDiskSettings = `
{ "builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test-withDiskSettings_{{timestamp}}",
"system_disk_mapping": {
"disk_size": 60
},
"image_disk_mappings": [
{
"disk_name": "datadisk1",
"disk_size": 25,
"disk_delete_with_instance": true
},
{
"disk_name": "datadisk2",
"disk_size": 25,
"disk_delete_with_instance": true
}
]
}]
}`
func checkImageDisksSettings() builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the Alicloud image IDs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
imageId := artifact.AlicloudImages[defaultTestRegion]
// describe the image, get block devices with a snapshot
client, _ := testAliyunClient()
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = defaultTestRegion
describeImagesRequest.ImageId = imageId
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("describe images failed due to %s", err)
}
if len(imagesResponse.Images.Image) == 0 {
return fmt.Errorf("generated image %s cannot be found", imageId)
}
image := imagesResponse.Images.Image[0]
if image.Size != 60 {
return fmt.Errorf("the size of image %s should be equal to 60G but got %dG", imageId, image.Size)
}
if len(image.DiskDeviceMappings.DiskDeviceMapping) != 3 {
return fmt.Errorf("image %s should contain 3 disks", imageId)
}
var snapshotIds []string
for _, mapping := range image.DiskDeviceMappings.DiskDeviceMapping {
if mapping.Type == DiskTypeSystem {
if mapping.Size != "60" {
return fmt.Errorf("the system snapshot size of image %s should be equal to 60G but got %sG", imageId, mapping.Size)
}
} else {
if mapping.Size != "25" {
return fmt.Errorf("the data disk size of image %s should be equal to 25G but got %sG", imageId, mapping.Size)
}
snapshotIds = append(snapshotIds, mapping.SnapshotId)
}
}
data, _ := json.Marshal(snapshotIds)
describeSnapshotRequest := ecs.CreateDescribeSnapshotsRequest()
describeSnapshotRequest.RegionId = defaultTestRegion
describeSnapshotRequest.SnapshotIds = string(data)
describeSnapshotsResponse, err := client.DescribeSnapshots(describeSnapshotRequest)
if err != nil {
return fmt.Errorf("describe data snapshots failed due to %s", err)
}
if len(describeSnapshotsResponse.Snapshots.Snapshot) != 2 {
return fmt.Errorf("expect %d data snapshots but got %d", len(snapshotIds), len(describeSnapshotsResponse.Snapshots.Snapshot))
}
var dataDiskIds []string
for _, snapshot := range describeSnapshotsResponse.Snapshots.Snapshot {
dataDiskIds = append(dataDiskIds, snapshot.SourceDiskId)
}
data, _ = json.Marshal(dataDiskIds)
describeDisksRequest := ecs.CreateDescribeDisksRequest()
describeDisksRequest.RegionId = defaultTestRegion
describeDisksRequest.DiskIds = string(data)
describeDisksResponse, err := client.DescribeDisks(describeDisksRequest)
if err != nil {
return fmt.Errorf("describe snapshots failed due to %s", err)
}
if len(describeDisksResponse.Disks.Disk) != 0 {
return fmt.Errorf("data disks should be deleted but %d left", len(describeDisksResponse.Disks.Disk))
}
return nil
}
}
func TestBuilderAcc_withIgnoreDataDisks(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccIgnoreDataDisks,
Check: checkIgnoreDataDisks(),
})
}
const testBuilderAccIgnoreDataDisks = `
{ "builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.gn5-c8g1.2xlarge",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test-ignoreDataDisks_{{timestamp}}",
"image_ignore_data_disks": true
}]
}`
func checkIgnoreDataDisks() builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the Alicloud image IDs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
imageId := artifact.AlicloudImages[defaultTestRegion]
// describe the image, get block devices with a snapshot
client, _ := testAliyunClient()
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = defaultTestRegion
describeImagesRequest.ImageId = imageId
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("describe images failed due to %s", err)
}
if len(imagesResponse.Images.Image) == 0 {
return fmt.Errorf("generated image %s cannot be found", imageId)
}
image := imagesResponse.Images.Image[0]
if len(image.DiskDeviceMappings.DiskDeviceMapping) != 1 {
return fmt.Errorf("image %s should only contain one disk", imageId)
}
return nil
}
}
func TestBuilderAcc_windows(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccWindows,
})
}
const testBuilderAccWindows = `
{ "builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"winsvr_64_dtcC_1809_en-us_40G_alibase_20190318.vhd",
"io_optimized":"true",
"communicator": "winrm",
"winrm_port": 5985,
"winrm_username": "Administrator",
"winrm_password": "Test1234",
"image_name": "packer-test-windows_{{timestamp}}",
"user_data_file": "../../../examples/alicloud/basic/winrm_enable_userdata.ps1"
}]
}`
func TestBuilderAcc_regionCopy(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccRegionCopy,
Check: checkRegionCopy([]string{"cn-hangzhou", "cn-shenzhen"}),
})
}
const testBuilderAccRegionCopy = `
{
"builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test-regionCopy_{{timestamp}}",
"image_copy_regions": ["cn-hangzhou", "cn-shenzhen"],
"image_copy_names": ["packer-copy-test-hz_{{timestamp}}", "packer-copy-test-sz_{{timestamp}}"]
}]
}
`
func checkRegionCopy(regions []string) builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the Alicloud image IDs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
// Verify that we copied to only the regions given
regionSet := make(map[string]struct{})
for _, r := range regions {
regionSet[r] = struct{}{}
}
for r := range artifact.AlicloudImages {
if r == "cn-beijing" {
delete(regionSet, r)
continue
}
if _, ok := regionSet[r]; !ok {
return fmt.Errorf("region %s is not the target region but found in artifacts", r)
}
delete(regionSet, r)
}
if len(regionSet) > 0 {
return fmt.Errorf("following region(s) should be the copying targets but corresponding artifact(s) not found: %#v", regionSet)
}
client, _ := testAliyunClient()
for regionId, imageId := range artifact.AlicloudImages {
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = regionId
describeImagesRequest.ImageId = imageId
describeImagesRequest.Status = ImageStatusQueried
describeImagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("describe generated image %s failed due to %s", imageId, err)
}
if len(describeImagesResponse.Images.Image) == 0 {
return fmt.Errorf("image %s in artifacts can not be found", imageId)
}
image := describeImagesResponse.Images.Image[0]
if image.IsCopied && regionId == "cn-hangzhou" && !strings.HasPrefix(image.ImageName, "packer-copy-test-hz") {
return fmt.Errorf("the name of image %s in artifacts should begin with %s but got %s", imageId, "packer-copy-test-hz", image.ImageName)
}
if image.IsCopied && regionId == "cn-shenzhen" && !strings.HasPrefix(image.ImageName, "packer-copy-test-sz") {
return fmt.Errorf("the name of image %s in artifacts should begin with %s but got %s", imageId, "packer-copy-test-sz", image.ImageName)
}
}
return nil
}
}
func TestBuilderAcc_forceDelete(t *testing.T) {
t.Parallel()
// Build the same alicloud image twice, with image_force_delete on the second run
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: buildForceDeregisterConfig("false", "delete"),
SkipArtifactTeardown: true,
})
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: buildForceDeregisterConfig("true", "delete"),
})
}
func buildForceDeregisterConfig(val, name string) string {
return fmt.Sprintf(testBuilderAccForceDelete, val, name)
}
const testBuilderAccForceDelete = `
{
"builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_force_delete": "%s",
"image_name": "packer-test-forceDelete_%s"
}]
}
`
func TestBuilderAcc_ECSImageSharing(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccSharing,
Check: checkECSImageSharing("1309208528360047"),
})
}
const testBuilderAccSharing = `
{
"builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test-ECSImageSharing_{{timestamp}}",
"image_share_account":["1309208528360047"]
}]
}
`
func checkECSImageSharing(uid string) builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the Alicloud image IDs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
// describe the share permissions of the generated image
client, _ := testAliyunClient()
describeImageShareRequest := ecs.CreateDescribeImageSharePermissionRequest()
describeImageShareRequest.RegionId = "cn-beijing"
describeImageShareRequest.ImageId = artifact.AlicloudImages["cn-beijing"]
imageShareResponse, err := client.DescribeImageSharePermission(describeImageShareRequest)
if err != nil {
return fmt.Errorf("Error retrieving Image Attributes for ECS Image Artifact (%#v) "+
"in ECS Image Sharing Test: %s", artifact, err)
}
if len(imageShareResponse.Accounts.Account) != 1 || imageShareResponse.Accounts.Account[0].AliyunId != uid {
return fmt.Errorf("share account is incorrect %d", len(imageShareResponse.Accounts.Account))
}
return nil
}
}
func TestBuilderAcc_forceDeleteSnapshot(t *testing.T) {
t.Parallel()
destImageName := "delete"
// Build the same alicloud image name twice, with image_force_delete_snapshots on the second run
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: buildForceDeleteSnapshotConfig("false", destImageName),
SkipArtifactTeardown: true,
})
// Get image data by image name
client, _ := testAliyunClient()
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = "cn-beijing"
describeImagesRequest.ImageName = "packer-test-" + destImageName
images, _ := client.DescribeImages(describeImagesRequest)
image := images.Images.Image[0]
// Get snapshot ids for image
snapshotIds := []string{}
for _, device := range image.DiskDeviceMappings.DiskDeviceMapping {
if device.Device != "" && device.SnapshotId != "" {
snapshotIds = append(snapshotIds, device.SnapshotId)
}
}
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: buildForceDeleteSnapshotConfig("true", destImageName),
Check: checkSnapshotsDeleted(snapshotIds),
})
}
func buildForceDeleteSnapshotConfig(val, name string) string {
return fmt.Sprintf(testBuilderAccForceDeleteSnapshot, val, val, name)
}
const testBuilderAccForceDeleteSnapshot = `
{
"builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_force_delete_snapshots": "%s",
"image_force_delete": "%s",
"image_name": "packer-test-%s"
}]
}
`
func checkSnapshotsDeleted(snapshotIds []string) builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
// Verify the snapshots are gone
client, _ := testAliyunClient()
data, err := json.Marshal(snapshotIds)
if err != nil {
return fmt.Errorf("Marshal snapshotIds array failed %v", err)
}
describeSnapshotsRequest := ecs.CreateDescribeSnapshotsRequest()
describeSnapshotsRequest.RegionId = "cn-beijing"
describeSnapshotsRequest.SnapshotIds = string(data)
snapshotResp, err := client.DescribeSnapshots(describeSnapshotsRequest)
if err != nil {
return fmt.Errorf("Query snapshot failed %v", err)
}
snapshots := snapshotResp.Snapshots.Snapshot
if len(snapshots) > 0 {
return fmt.Errorf("Snapshots weren't successfully deleted by " +
"`image_force_delete_snapshots`")
}
return nil
}
}
func TestBuilderAcc_imageTags(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccImageTags,
Check: checkImageTags(),
})
}
const testBuilderAccImageTags = `
{ "builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"ssh_username": "root",
"io_optimized":"true",
"image_name": "packer-test-imageTags_{{timestamp}}",
"tags": {
"TagKey1": "TagValue1",
"TagKey2": "TagValue2"
}
}]
}`
func checkImageTags() builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the AMIs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
imageId := artifact.AlicloudImages[defaultTestRegion]
// describe the image, get block devices with a snapshot
client, _ := testAliyunClient()
describeImageTagsRequest := ecs.CreateDescribeTagsRequest()
describeImageTagsRequest.RegionId = defaultTestRegion
describeImageTagsRequest.ResourceType = TagResourceImage
describeImageTagsRequest.ResourceId = imageId
imageTagsResponse, err := client.DescribeTags(describeImageTagsRequest)
if err != nil {
return fmt.Errorf("Error retrieving Image Attributes for ECS Image Artifact (%#v) "+
"in ECS Image Tags Test: %s", artifact, err)
}
if len(imageTagsResponse.Tags.Tag) != 2 {
return fmt.Errorf("expect 2 tags set on image %s but got %d", imageId, len(imageTagsResponse.Tags.Tag))
}
for _, tag := range imageTagsResponse.Tags.Tag {
if tag.TagKey != "TagKey1" && tag.TagKey != "TagKey2" {
return fmt.Errorf("tags on image %s should be within the list of TagKey1 and TagKey2 but got %s", imageId, tag.TagKey)
}
if tag.TagKey == "TagKey1" && tag.TagValue != "TagValue1" {
return fmt.Errorf("the value for tag %s on image %s should be TagValue1 but got %s", tag.TagKey, imageId, tag.TagValue)
} else if tag.TagKey == "TagKey2" && tag.TagValue != "TagValue2" {
return fmt.Errorf("the value for tag %s on image %s should be TagValue2 but got %s", tag.TagKey, imageId, tag.TagValue)
}
}
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = defaultTestRegion
describeImagesRequest.ImageId = imageId
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("describe images failed due to %s", err)
}
if len(imagesResponse.Images.Image) == 0 {
return fmt.Errorf("image %s generated can not be found", imageId)
}
image := imagesResponse.Images.Image[0]
for _, mapping := range image.DiskDeviceMappings.DiskDeviceMapping {
describeSnapshotTagsRequest := ecs.CreateDescribeTagsRequest()
describeSnapshotTagsRequest.RegionId = defaultTestRegion
describeSnapshotTagsRequest.ResourceType = TagResourceSnapshot
describeSnapshotTagsRequest.ResourceId = mapping.SnapshotId
snapshotTagsResponse, err := client.DescribeTags(describeSnapshotTagsRequest)
if err != nil {
return fmt.Errorf("failed to get snapshot tags due to %s", err)
}
if len(snapshotTagsResponse.Tags.Tag) != 2 {
return fmt.Errorf("expect 2 tags set on snapshot %s but got %d", mapping.SnapshotId, len(snapshotTagsResponse.Tags.Tag))
}
for _, tag := range snapshotTagsResponse.Tags.Tag {
if tag.TagKey != "TagKey1" && tag.TagKey != "TagKey2" {
return fmt.Errorf("tags on snapshot %s should be within the list of TagKey1 and TagKey2 but got %s", mapping.SnapshotId, tag.TagKey)
}
if tag.TagKey == "TagKey1" && tag.TagValue != "TagValue1" {
return fmt.Errorf("the value for tag %s on snapshot %s should be TagValue1 but got %s", tag.TagKey, mapping.SnapshotId, tag.TagValue)
} else if tag.TagKey == "TagKey2" && tag.TagValue != "TagValue2" {
return fmt.Errorf("the value for tag %s on snapshot %s should be TagValue2 but got %s", tag.TagKey, mapping.SnapshotId, tag.TagValue)
}
}
}
return nil
}
}
func TestBuilderAcc_dataDiskEncrypted(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccDataDiskEncrypted,
Check: checkDataDiskEncrypted(),
})
}
const testBuilderAccDataDiskEncrypted = `
{ "builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test-dataDiskEncrypted_{{timestamp}}",
"image_disk_mappings": [
{
"disk_name": "data_disk1",
"disk_size": 25,
"disk_encrypted": true,
"disk_delete_with_instance": true
},
{
"disk_name": "data_disk2",
"disk_size": 35,
"disk_encrypted": false,
"disk_delete_with_instance": true
},
{
"disk_name": "data_disk3",
"disk_size": 45,
"disk_delete_with_instance": true
}
]
}]
}`
func checkDataDiskEncrypted() builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the AMIs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
imageId := artifact.AlicloudImages[defaultTestRegion]
// describe the image, get block devices with a snapshot
client, _ := testAliyunClient()
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = defaultTestRegion
describeImagesRequest.ImageId = imageId
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("describe images failed due to %s", err)
}
if len(imagesResponse.Images.Image) == 0 {
return fmt.Errorf("image %s generated can not be found", imageId)
}
image := imagesResponse.Images.Image[0]
var snapshotIds []string
for _, mapping := range image.DiskDeviceMappings.DiskDeviceMapping {
snapshotIds = append(snapshotIds, mapping.SnapshotId)
}
data, _ := json.Marshal(snapshotIds)
describeSnapshotRequest := ecs.CreateDescribeSnapshotsRequest()
describeSnapshotRequest.RegionId = defaultTestRegion
describeSnapshotRequest.SnapshotIds = string(data)
describeSnapshotsResponse, err := client.DescribeSnapshots(describeSnapshotRequest)
if err != nil {
return fmt.Errorf("describe data snapshots failed due to %s", err)
}
if len(describeSnapshotsResponse.Snapshots.Snapshot) != 4 {
return fmt.Errorf("expect %d data snapshots but got %d", len(snapshotIds), len(describeSnapshotsResponse.Snapshots.Snapshot))
}
snapshots := describeSnapshotsResponse.Snapshots.Snapshot
for _, snapshot := range snapshots {
if snapshot.SourceDiskType == DiskTypeSystem {
if snapshot.Encrypted {
return fmt.Errorf("the system snapshot is expected to be non-encrypted but got true")
}
continue
}
if snapshot.SourceDiskSize == "25" && !snapshot.Encrypted {
return fmt.Errorf("the first data disk snapshot is expected to be encrypted but got false")
}
if snapshot.SourceDiskSize == "35" && snapshot.Encrypted {
return fmt.Errorf("the second data disk snapshot is expected to be non-encrypted but got true")
}
if snapshot.SourceDiskSize == "45" && snapshot.Encrypted {
return fmt.Errorf("the third data disk snapshot is expected to be non-encrypted but got true")
}
}
return nil
}
}
func TestBuilderAcc_systemDiskEncrypted(t *testing.T) {
t.Parallel()
builderT.Test(t, builderT.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Builder: &Builder{},
Template: testBuilderAccSystemDiskEncrypted,
Check: checkSystemDiskEncrypted(),
})
}
const testBuilderAccSystemDiskEncrypted = `
{
"builders": [{
"type": "test",
"region": "cn-beijing",
"instance_type": "ecs.n1.tiny",
"source_image":"ubuntu_18_04_64_20G_alibase_20190509.vhd",
"io_optimized":"true",
"ssh_username":"root",
"image_name": "packer-test_{{timestamp}}",
"image_encrypted": "true"
}]
}`
func checkSystemDiskEncrypted() builderT.TestCheckFunc {
return func(artifacts []packersdk.Artifact) error {
if len(artifacts) > 1 {
return fmt.Errorf("more than 1 artifact")
}
// Get the actual *Artifact pointer so we can access the AMIs directly
artifactRaw := artifacts[0]
artifact, ok := artifactRaw.(*Artifact)
if !ok {
return fmt.Errorf("unknown artifact: %#v", artifactRaw)
}
// describe the image, get block devices with a snapshot
client, _ := testAliyunClient()
imageId := artifact.AlicloudImages[defaultTestRegion]
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = defaultTestRegion
describeImagesRequest.ImageId = imageId
describeImagesRequest.Status = ImageStatusQueried
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("describe images failed due to %s", err)
}
if len(imagesResponse.Images.Image) == 0 {
return fmt.Errorf("image %s generated can not be found", imageId)
}
image := imagesResponse.Images.Image[0]
if image.IsCopied == false {
return fmt.Errorf("image %s generated is expected to be copied but is not", image.ImageId)
}
describeSnapshotRequest := ecs.CreateDescribeSnapshotsRequest()
describeSnapshotRequest.RegionId = defaultTestRegion
describeSnapshotRequest.SnapshotIds = fmt.Sprintf("[\"%s\"]", image.DiskDeviceMappings.DiskDeviceMapping[0].SnapshotId)
describeSnapshotsResponse, err := client.DescribeSnapshots(describeSnapshotRequest)
if err != nil {
return fmt.Errorf("describe system snapshots failed due to %s", err)
}
if len(describeSnapshotsResponse.Snapshots.Snapshot) == 0 {
return fmt.Errorf("system snapshot of image %s can not be found", imageId)
}
snapshot := describeSnapshotsResponse.Snapshots.Snapshot[0]
if !snapshot.Encrypted {
return fmt.Errorf("system snapshot of image %s is expected to be encrypted but got false", imageId)
}
return nil
}
}
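// testAccPreCheck verifies that the credentials required by the acceptance
// tests (ALICLOUD_ACCESS_KEY and ALICLOUD_SECRET_KEY) are present in the
// environment before any API calls are made.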
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("ALICLOUD_ACCESS_KEY"); v == "" {
t.Fatal("ALICLOUD_ACCESS_KEY must be set for acceptance tests")
}
if v := os.Getenv("ALICLOUD_SECRET_KEY"); v == "" {
t.Fatal("ALICLOUD_SECRET_KEY must be set for acceptance tests")
}
}
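// testAliyunClient builds a ClientWrapper for the cn-beijing region from the
// environment credentials; it is shared by the checks above.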
func testAliyunClient() (*ClientWrapper, error) {
access := &AlicloudAccessConfig{AlicloudRegion: "cn-beijing"}
err := access.Config()
if err != nil {
return nil, err
}
client, err := access.Client()
if err != nil {
return nil, err
}
return client, nil
}
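// A minimal sketch (not part of the original file) of how these acceptance
// tests are typically invoked with the standard Go toolchain; the package
// path shown is illustrative:
//
//   ALICLOUD_ACCESS_KEY=... ALICLOUD_SECRET_KEY=... \
//     go test -v -timeout 120m ./builder/alicloud/ecs -run TestBuilderAcc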


@ -1,237 +0,0 @@
package ecs
import (
"reflect"
"testing"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
helperconfig "github.com/hashicorp/packer-plugin-sdk/template/config"
)
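// testBuilderConfig returns the minimal configuration map exercised by the
// unit tests below; individual tests override or delete keys as needed.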
func testBuilderConfig() map[string]interface{} {
return map[string]interface{}{
"access_key": "foo",
"secret_key": "bar",
"source_image": "foo",
"instance_type": "ecs.n1.tiny",
"region": "cn-beijing",
"ssh_username": "root",
"image_name": "foo",
"io_optimized": true,
}
}
func TestBuilder_ImplementsBuilder(t *testing.T) {
var raw interface{}
raw = &Builder{}
if _, ok := raw.(packersdk.Builder); !ok {
t.Fatalf("Builder should be a builder")
}
}
func TestBuilder_Prepare_BadType(t *testing.T) {
b := &Builder{}
c := map[string]interface{}{
"access_key": []string{},
}
_, warnings, err := b.Prepare(c)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err == nil {
t.Fatalf("prepare should fail")
}
}
func TestBuilderPrepare_ECSImageName(t *testing.T) {
var b Builder
config := testBuilderConfig()
// Test good
config["image_name"] = "ecs.n1.tiny"
_, warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
// Test bad
config["ecs_image_name"] = "foo {{"
b = Builder{}
_, warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err == nil {
t.Fatal("should have error")
}
// Test bad
delete(config, "image_name")
b = Builder{}
_, warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err == nil {
t.Fatal("should have error")
}
}
func TestBuilderPrepare_InvalidKey(t *testing.T) {
var b Builder
config := testBuilderConfig()
// Add a random key
config["i_should_not_be_valid"] = true
_, warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err == nil {
t.Fatal("should have error")
}
}
func TestBuilderPrepare_Devices(t *testing.T) {
var b Builder
config := testBuilderConfig()
config["system_disk_mapping"] = map[string]interface{}{
"disk_category": "cloud",
"disk_description": "system disk",
"disk_name": "system_disk",
"disk_size": 60,
}
config["image_disk_mappings"] = []map[string]interface{}{
{
"disk_category": "cloud_efficiency",
"disk_name": "data_disk1",
"disk_size": 100,
"disk_snapshot_id": "s-1",
"disk_description": "data disk1",
"disk_device": "/dev/xvdb",
"disk_delete_with_instance": false,
},
{
"disk_name": "data_disk2",
"disk_device": "/dev/xvdc",
},
}
_, warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
expected := AlicloudDiskDevice{
DiskCategory: "cloud",
Description: "system disk",
DiskName: "system_disk",
DiskSize: 60,
Encrypted: helperconfig.TriUnset,
}
if !reflect.DeepEqual(b.config.ECSSystemDiskMapping, expected) {
t.Fatalf("system disk is not set properly, actual: %v; expected: %v", b.config.ECSSystemDiskMapping, expected)
}
if !reflect.DeepEqual(b.config.ECSImagesDiskMappings, []AlicloudDiskDevice{
{
DiskCategory: "cloud_efficiency",
DiskName: "data_disk1",
DiskSize: 100,
SnapshotId: "s-1",
Description: "data disk1",
Device: "/dev/xvdb",
DeleteWithInstance: false,
},
{
DiskName: "data_disk2",
Device: "/dev/xvdc",
},
}) {
t.Fatalf("data disks are not set properly, actual: %#v", b.config.ECSImagesDiskMappings)
}
}
func TestBuilderPrepare_IgnoreDataDisks(t *testing.T) {
var b Builder
config := testBuilderConfig()
_, warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.AlicloudImageIgnoreDataDisks != false {
t.Fatalf("image_ignore_data_disks is not set properly, expect: %t, actual: %t", false, b.config.AlicloudImageIgnoreDataDisks)
}
config["image_ignore_data_disks"] = "false"
_, warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.AlicloudImageIgnoreDataDisks != false {
t.Fatalf("image_ignore_data_disks is not set properly, expect: %t, actual: %t", false, b.config.AlicloudImageIgnoreDataDisks)
}
config["image_ignore_data_disks"] = "true"
_, warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.AlicloudImageIgnoreDataDisks != true {
t.Fatalf("image_ignore_data_disks is not set properly, expect: %t, actual: %t", true, b.config.AlicloudImageIgnoreDataDisks)
}
}
func TestBuilderPrepare_WaitSnapshotReadyTimeout(t *testing.T) {
var b Builder
config := testBuilderConfig()
_, warnings, err := b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.WaitSnapshotReadyTimeout != 0 {
t.Fatalf("wait_snapshot_ready_timeout is not set properly, expect: %d, actual: %d", 0, b.config.WaitSnapshotReadyTimeout)
}
if b.getSnapshotReadyTimeout() != ALICLOUD_DEFAULT_LONG_TIMEOUT {
t.Fatalf("default timeout is not set properly, expect: %d, actual: %d", ALICLOUD_DEFAULT_LONG_TIMEOUT, b.getSnapshotReadyTimeout())
}
config["wait_snapshot_ready_timeout"] = ALICLOUD_DEFAULT_TIMEOUT
_, warnings, err = b.Prepare(config)
if len(warnings) > 0 {
t.Fatalf("bad: %#v", warnings)
}
if err != nil {
t.Fatalf("should not have error: %s", err)
}
if b.config.WaitSnapshotReadyTimeout != ALICLOUD_DEFAULT_TIMEOUT {
t.Fatalf("wait_snapshot_ready_timeout is not set properly, expect: %d, actual: %d", ALICLOUD_DEFAULT_TIMEOUT, b.config.WaitSnapshotReadyTimeout)
}
if b.getSnapshotReadyTimeout() != ALICLOUD_DEFAULT_TIMEOUT {
t.Fatalf("default timeout is not set properly, expect: %d, actual: %d", ALICLOUD_DEFAULT_TIMEOUT, b.getSnapshotReadyTimeout())
}
}


@ -1,310 +0,0 @@
package ecs
import (
"fmt"
"time"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
)
type ClientWrapper struct {
*ecs.Client
}
const (
InstanceStatusRunning = "Running"
InstanceStatusStarting = "Starting"
InstanceStatusStopped = "Stopped"
InstanceStatusStopping = "Stopping"
)
const (
ImageStatusWaiting = "Waiting"
ImageStatusCreating = "Creating"
ImageStatusCreateFailed = "CreateFailed"
ImageStatusAvailable = "Available"
)
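// ImageStatusQueried is the comma-separated list of every image status; it is
// passed as the Status filter of DescribeImages so that images in any of
// these states are returned.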
var ImageStatusQueried = fmt.Sprintf("%s,%s,%s,%s", ImageStatusWaiting, ImageStatusCreating, ImageStatusCreateFailed, ImageStatusAvailable)
const (
SnapshotStatusAll = "all"
SnapshotStatusProgressing = "progressing"
SnapshotStatusAccomplished = "accomplished"
SnapshotStatusFailed = "failed"
)
const (
DiskStatusInUse = "In_use"
DiskStatusAvailable = "Available"
DiskStatusAttaching = "Attaching"
DiskStatusDetaching = "Detaching"
DiskStatusCreating = "Creating"
DiskStatusReIniting = "ReIniting"
)
const (
VpcStatusPending = "Pending"
VpcStatusAvailable = "Available"
)
const (
VSwitchStatusPending = "Pending"
VSwitchStatusAvailable = "Available"
)
const (
EipStatusAssociating = "Associating"
EipStatusUnassociating = "Unassociating"
EipStatusInUse = "InUse"
EipStatusAvailable = "Available"
)
const (
ImageOwnerSystem = "system"
ImageOwnerSelf = "self"
ImageOwnerOthers = "others"
ImageOwnerMarketplace = "marketplace"
)
const (
IOOptimizedNone = "none"
IOOptimizedOptimized = "optimized"
)
const (
InstanceNetworkClassic = "classic"
InstanceNetworkVpc = "vpc"
)
const (
DiskTypeSystem = "system"
DiskTypeData = "data"
)
const (
TagResourceImage = "image"
TagResourceInstance = "instance"
TagResourceSnapshot = "snapshot"
TagResourceDisk = "disk"
)
const (
IpProtocolAll = "all"
IpProtocolTCP = "tcp"
IpProtocolUDP = "udp"
IpProtocolICMP = "icmp"
IpProtocolGRE = "gre"
)
const (
NicTypeInternet = "internet"
NicTypeIntranet = "intranet"
)
const (
DefaultPortRange = "-1/-1"
DefaultCidrIp = "0.0.0.0/0"
DefaultCidrBlock = "172.16.0.0/24"
)
const (
defaultRetryInterval = 5 * time.Second
defaultRetryTimes = 12
shortRetryTimes = 36
mediumRetryTimes = 360
longRetryTimes = 720
)
type WaitForExpectEvalResult struct {
evalPass bool
stopRetry bool
}
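// The three sentinel results below drive WaitForExpected: WaitForExpectSuccess
// stops retrying and returns the response, WaitForExpectToRetry sleeps and
// tries again, and WaitForExpectFailToStop aborts immediately with the error.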
var (
WaitForExpectSuccess = WaitForExpectEvalResult{
evalPass: true,
stopRetry: true,
}
WaitForExpectToRetry = WaitForExpectEvalResult{
evalPass: false,
stopRetry: false,
}
WaitForExpectFailToStop = WaitForExpectEvalResult{
evalPass: false,
stopRetry: true,
}
)
type WaitForExpectArgs struct {
RequestFunc func() (responses.AcsResponse, error)
EvalFunc func(response responses.AcsResponse, err error) WaitForExpectEvalResult
RetryInterval time.Duration
RetryTimes int
RetryTimeout time.Duration
}
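// WaitForExpected repeatedly calls RequestFunc and passes the result to
// EvalFunc, sleeping RetryInterval (default 5s) between attempts. It stops as
// soon as the evaluation passes or requests a stop, or once RetryTimeout (if
// set) or otherwise RetryTimes (default 12) is exhausted, in which case the
// last response is returned together with a wrapping error.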
func (c *ClientWrapper) WaitForExpected(args *WaitForExpectArgs) (responses.AcsResponse, error) {
if args.RetryInterval <= 0 {
args.RetryInterval = defaultRetryInterval
}
if args.RetryTimes <= 0 {
args.RetryTimes = defaultRetryTimes
}
var timeoutPoint time.Time
if args.RetryTimeout > 0 {
timeoutPoint = time.Now().Add(args.RetryTimeout)
}
var lastResponse responses.AcsResponse
var lastError error
for i := 0; ; i++ {
if args.RetryTimeout > 0 && time.Now().After(timeoutPoint) {
break
}
if args.RetryTimeout <= 0 && i >= args.RetryTimes {
break
}
response, err := args.RequestFunc()
lastResponse = response
lastError = err
evalResult := args.EvalFunc(response, err)
if evalResult.evalPass {
return response, nil
}
if evalResult.stopRetry {
return response, err
}
time.Sleep(args.RetryInterval)
}
if lastError == nil {
lastError = fmt.Errorf("<no error>")
}
if args.RetryTimeout > 0 {
return lastResponse, fmt.Errorf("evaluate failed after %d seconds timeout with %d seconds retry interval: %s", int(args.RetryTimeout.Seconds()), int(args.RetryInterval.Seconds()), lastError)
}
return lastResponse, fmt.Errorf("evaluate failed after %d times retry with %d seconds retry interval: %s", args.RetryTimes, int(args.RetryInterval.Seconds()), lastError)
}
func (c *ClientWrapper) WaitForInstanceStatus(regionId string, instanceId string, expectedStatus string) (responses.AcsResponse, error) {
return c.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDescribeInstancesRequest()
request.RegionId = regionId
request.InstanceIds = fmt.Sprintf("[\"%s\"]", instanceId)
return c.DescribeInstances(request)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
return WaitForExpectToRetry
}
instancesResponse := response.(*ecs.DescribeInstancesResponse)
instances := instancesResponse.Instances.Instance
for _, instance := range instances {
if instance.Status == expectedStatus {
return WaitForExpectSuccess
}
}
return WaitForExpectToRetry
},
RetryTimes: mediumRetryTimes,
})
}
func (c *ClientWrapper) WaitForImageStatus(regionId string, imageId string, expectedStatus string, timeout time.Duration) (responses.AcsResponse, error) {
return c.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDescribeImagesRequest()
request.RegionId = regionId
request.ImageId = imageId
request.Status = ImageStatusQueried
return c.DescribeImages(request)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
return WaitForExpectToRetry
}
imagesResponse := response.(*ecs.DescribeImagesResponse)
images := imagesResponse.Images.Image
for _, image := range images {
if image.Status == expectedStatus {
return WaitForExpectSuccess
}
}
return WaitForExpectToRetry
},
RetryTimeout: timeout,
})
}
func (c *ClientWrapper) WaitForSnapshotStatus(regionId string, snapshotId string, expectedStatus string, timeout time.Duration) (responses.AcsResponse, error) {
return c.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDescribeSnapshotsRequest()
request.RegionId = regionId
request.SnapshotIds = fmt.Sprintf("[\"%s\"]", snapshotId)
return c.DescribeSnapshots(request)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
return WaitForExpectToRetry
}
snapshotsResponse := response.(*ecs.DescribeSnapshotsResponse)
snapshots := snapshotsResponse.Snapshots.Snapshot
for _, snapshot := range snapshots {
if snapshot.Status == expectedStatus {
return WaitForExpectSuccess
}
}
return WaitForExpectToRetry
},
RetryTimeout: timeout,
})
}
type EvalErrorType bool
const (
EvalRetryErrorType = EvalErrorType(true)
EvalNotRetryErrorType = EvalErrorType(false)
)
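// EvalCouldRetryResponse builds an EvalFunc from a list of error codes: with
// EvalRetryErrorType only the listed codes are retried and anything else stops
// the wait, while with EvalNotRetryErrorType the listed codes stop the wait
// immediately and all other errors are retried. A nil error is success.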
func (c *ClientWrapper) EvalCouldRetryResponse(evalErrors []string, evalErrorType EvalErrorType) func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
return func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err == nil {
return WaitForExpectSuccess
}
e, ok := err.(errors.Error)
if !ok {
return WaitForExpectToRetry
}
if evalErrorType == EvalRetryErrorType && !ContainsInArray(evalErrors, e.ErrorCode()) {
return WaitForExpectFailToStop
}
if evalErrorType == EvalNotRetryErrorType && ContainsInArray(evalErrors, e.ErrorCode()) {
return WaitForExpectFailToStop
}
return WaitForExpectToRetry
}
}


@ -1,80 +0,0 @@
package ecs
import (
"fmt"
"testing"
"time"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
)
func TestWaitForExpectedExceedRetryTimes(t *testing.T) {
c := ClientWrapper{}
iter := 0
waitDone := make(chan bool, 1)
go func() {
_, _ = c.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
iter++
return nil, fmt.Errorf("test: let iteration %d failed", iter)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
fmt.Printf("need retry: %s\n", err)
return WaitForExpectToRetry
}
return WaitForExpectSuccess
},
})
waitDone <- true
}()
select {
case <-waitDone:
if iter != defaultRetryTimes {
t.Fatalf("WaitForExpected should terminate after %d iterations", defaultRetryTimes)
}
}
}
func TestWaitForExpectedExceedRetryTimeout(t *testing.T) {
c := ClientWrapper{}
expectTimeout := 10 * time.Second
iter := 0
waitDone := make(chan bool, 1)
go func() {
_, _ = c.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
iter++
return nil, fmt.Errorf("test: let iteration %d failed", iter)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
fmt.Printf("need retry: %s\n", err)
return WaitForExpectToRetry
}
return WaitForExpectSuccess
},
RetryTimeout: expectTimeout,
})
waitDone <- true
}()
timeTolerance := 1 * time.Second
select {
case <-waitDone:
if iter > int(expectTimeout/defaultRetryInterval) {
t.Fatalf("WaitForExpected should not exceed %d iterations", int(expectTimeout/defaultRetryInterval))
}
case <-time.After(expectTimeout + timeTolerance):
t.Fatalf("WaitForExpected should terminate within %f seconds", (expectTimeout + timeTolerance).Seconds())
}
}


@ -1,197 +0,0 @@
//go:generate struct-markdown
package ecs
import (
"fmt"
"regexp"
"strings"
"github.com/hashicorp/packer-plugin-sdk/template/config"
"github.com/hashicorp/packer-plugin-sdk/template/interpolate"
)
// The "AlicloudDiskDevice" object is used for the `ECSSystemDiskMapping` and
// `ECSImagesDiskMappings` options, and contains the following fields:
type AlicloudDiskDevice struct {
// The value of disk name is blank by default. [2,
// 128] English or Chinese characters, must begin with an
// uppercase/lowercase letter or Chinese character. Can contain numbers,
// ., _ and -. The disk name will appear on the console. It cannot
// begin with `http://` or `https://`.
DiskName string `mapstructure:"disk_name" required:"false"`
// Category of the system disk. Optional values are:
// - cloud - general cloud disk
// - cloud_efficiency - efficiency cloud disk
// - cloud_ssd - cloud SSD
DiskCategory string `mapstructure:"disk_category" required:"false"`
// Size of the system disk, measured in GiB. Value
// range: [20, 500]. The specified value must be equal to or greater
// than max{20, ImageSize}. Default value: max{40, ImageSize}.
DiskSize int `mapstructure:"disk_size" required:"false"`
// Snapshots are used to create the data
// disk. After this parameter is specified, Size is ignored. The actual
// size of the created disk is the size of the specified snapshot.
// This field is only used in the ECSImagesDiskMappings option, not
// the ECSSystemDiskMapping option.
SnapshotId string `mapstructure:"disk_snapshot_id" required:"false"`
// The value of disk description is blank by
// default. [2, 256] characters. The disk description will appear on the
// console. It cannot begin with `http://` or `https://`.
Description string `mapstructure:"disk_description" required:"false"`
// Whether or not the disk is
// released along with the instance.
DeleteWithInstance bool `mapstructure:"disk_delete_with_instance" required:"false"`
// Device information of the related instance:
// such as /dev/xvdb. It is null unless the Status is In_use.
Device string `mapstructure:"disk_device" required:"false"`
// Whether or not to encrypt the data disk.
// If this option is set to true, the data disk will be encrypted and
// corresponding snapshot in the target image will also be encrypted. By
// default, if this is an extra data disk, Packer will not encrypt the
// data disk. Otherwise, Packer will keep the encryption setting to what
// it was in the source image. Please refer to Introduction of ECS disk
// encryption for more details.
Encrypted config.Trilean `mapstructure:"disk_encrypted" required:"false"`
}
// The "AlicloudDiskDevices" object is used to define disk mappings for your
// instance.
type AlicloudDiskDevices struct {
// Image disk mapping for the system disk.
// See the [disk device configuration](#disk-devices-configuration) section
// for more information on options.
// Usage example:
//
// ```json
// "builders": [{
// "type":"alicloud-ecs",
// "system_disk_mapping": {
// "disk_size": 50,
// "disk_name": "mydisk"
// },
// ...
// }
// ```
ECSSystemDiskMapping AlicloudDiskDevice `mapstructure:"system_disk_mapping" required:"false"`
// Add one or more data disks to the image.
// See the [disk device configuration](#disk-devices-configuration) section
// for more information on options.
// Usage example:
//
// ```json
// "builders": [{
// "type":"alicloud-ecs",
// "image_disk_mappings": [
// {
// "disk_snapshot_id": "someid",
// "disk_device": "dev/xvdb"
// }
// ],
// ...
// }
// ```
ECSImagesDiskMappings []AlicloudDiskDevice `mapstructure:"image_disk_mappings" required:"false"`
}
type AlicloudImageConfig struct {
// The name of the user-defined image, [2, 128] English or Chinese
// characters. It must begin with an uppercase/lowercase letter or a
// Chinese character, and may contain numbers, `_` or `-`. It cannot begin
// with `http://` or `https://`.
AlicloudImageName string `mapstructure:"image_name" required:"true"`
// The version number of the image, with a length limit of 1 to 40 English
// characters.
AlicloudImageVersion string `mapstructure:"image_version" required:"false"`
// The description of the image, with a length limit of 0 to 256
// characters. Leaving it blank means null, which is the default value. It
// cannot begin with `http://` or `https://`.
AlicloudImageDescription string `mapstructure:"image_description" required:"false"`
// The IDs of to-be-added Aliyun accounts to which the image is shared. The
// number of accounts is 1 to 10. If the number of accounts is greater than 10,
// this parameter is ignored.
AlicloudImageShareAccounts []string `mapstructure:"image_share_account" required:"false"`
AlicloudImageUNShareAccounts []string `mapstructure:"image_unshare_account"`
// Copy to the destination regionIds.
AlicloudImageDestinationRegions []string `mapstructure:"image_copy_regions" required:"false"`
// The name of the destination image, [2, 128] English or Chinese
// characters. It must begin with an uppercase/lowercase letter or a
// Chinese character, and may contain numbers, _ or -. It cannot begin with
// `http://` or `https://`.
AlicloudImageDestinationNames []string `mapstructure:"image_copy_names" required:"false"`
// Whether or not to encrypt the target images, including those
// copied if image_copy_regions is specified. If this option is set to
// true, a temporary image will be created from the provisioned instance in
// the main region and an encrypted copy will be generated in the same
// region. By default, Packer will keep the encryption setting to what it
// was in the source image.
ImageEncrypted config.Trilean `mapstructure:"image_encrypted" required:"false"`
// If this value is true, when the target image names including those
// copied are duplicated with existing images, it will delete the existing
// images and then create the target images, otherwise, the creation will
// fail. The default value is false. Check `image_name` and
// `image_copy_names` options for names of target images. If
// [-force](/docs/commands/build#force) option is provided in `build`
// command, this option can be omitted and taken as true.
AlicloudImageForceDelete bool `mapstructure:"image_force_delete" required:"false"`
// If this value is true, when deleting the duplicated existing images, the
// source snapshots of those images will be deleted as well. If
// [-force](/docs/commands/build#force) option is provided in `build`
// command, this option can be omitted and taken as true.
AlicloudImageForceDeleteSnapshots bool `mapstructure:"image_force_delete_snapshots" required:"false"`
AlicloudImageForceDeleteInstances bool `mapstructure:"image_force_delete_instances"`
// If this value is true, the image created will not include any snapshot
// of data disks. This option is useful when the default data disks bundled
// with certain instance types are not needed. The default
// value is false.
AlicloudImageIgnoreDataDisks bool `mapstructure:"image_ignore_data_disks" required:"false"`
// The region validation can be skipped if this value is true, the default
// value is false.
AlicloudImageSkipRegionValidation bool `mapstructure:"skip_region_validation" required:"false"`
// Key/value pair tags applied to the destination image and relevant
// snapshots.
AlicloudImageTags map[string]string `mapstructure:"tags" required:"false"`
// Same as [`tags`](#tags) but defined as a singular repeatable block
// containing a `key` and a `value` field. In HCL2 mode the
// [`dynamic_block`](/docs/templates/hcl_templates/expressions#dynamic-blocks)
// will allow you to create those programmatically.
AlicloudImageTag config.KeyValues `mapstructure:"tag" required:"false"`
AlicloudDiskDevices `mapstructure:",squash"`
}
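// Prepare validates the image configuration: it copies singular `tag` blocks
// into `tags`, enforces the 2-128 character image_name rules (no spaces, no
// http:// or https:// prefix), and de-duplicates image_copy_regions.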
func (c *AlicloudImageConfig) Prepare(ctx *interpolate.Context) []error {
var errs []error
errs = append(errs, c.AlicloudImageTag.CopyOn(&c.AlicloudImageTags)...)
if c.AlicloudImageName == "" {
errs = append(errs, fmt.Errorf("image_name must be specified"))
} else if len(c.AlicloudImageName) < 2 || len(c.AlicloudImageName) > 128 {
errs = append(errs, fmt.Errorf("image_name must less than 128 letters and more than 1 letters"))
} else if strings.HasPrefix(c.AlicloudImageName, "http://") ||
strings.HasPrefix(c.AlicloudImageName, "https://") {
errs = append(errs, fmt.Errorf("image_name can't start with 'http://' or 'https://'"))
}
reg := regexp.MustCompile(`\s+`)
if reg.FindString(c.AlicloudImageName) != "" {
errs = append(errs, fmt.Errorf("image_name can't include spaces"))
}
if len(c.AlicloudImageDestinationRegions) > 0 {
regionSet := make(map[string]struct{})
regions := make([]string, 0, len(c.AlicloudImageDestinationRegions))
for _, region := range c.AlicloudImageDestinationRegions {
// If we already saw the region, then don't look again
if _, ok := regionSet[region]; ok {
continue
}
// Mark that we saw the region
regionSet[region] = struct{}{}
regions = append(regions, region)
}
c.AlicloudImageDestinationRegions = regions
}
return errs
}


@ -1,61 +0,0 @@
package ecs
import (
"testing"
)
func testAlicloudImageConfig() *AlicloudImageConfig {
return &AlicloudImageConfig{
AlicloudImageName: "foo",
}
}
func TestECSImageConfigPrepare_name(t *testing.T) {
c := testAlicloudImageConfig()
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
c.AlicloudImageName = ""
if err := c.Prepare(nil); err == nil {
t.Fatal("should have error")
}
}
func TestAMIConfigPrepare_regions(t *testing.T) {
c := testAlicloudImageConfig()
c.AlicloudImageDestinationRegions = nil
if err := c.Prepare(nil); err != nil {
t.Fatalf("shouldn't have err: %s", err)
}
c.AlicloudImageDestinationRegions = []string{"cn-beijing", "cn-hangzhou", "eu-central-1"}
if err := c.Prepare(nil); err != nil {
t.Fatalf("bad: %s", err)
}
c.AlicloudImageDestinationRegions = nil
c.AlicloudImageSkipRegionValidation = true
if err := c.Prepare(nil); err != nil {
t.Fatal("shouldn't have error")
}
c.AlicloudImageSkipRegionValidation = false
}
func TestECSImageConfigPrepare_imageTags(t *testing.T) {
c := testAlicloudImageConfig()
c.AlicloudImageTags = map[string]string{
"TagKey1": "TagValue1",
"TagKey2": "TagValue2",
}
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if len(c.AlicloudImageTags) != 2 || c.AlicloudImageTags["TagKey1"] != "TagValue1" ||
c.AlicloudImageTags["TagKey2"] != "TagValue2" {
t.Fatalf("invalid value, expected: %s, actual: %s", map[string]string{
"TagKey1": "TagValue1",
"TagKey2": "TagValue2",
}, c.AlicloudImageTags)
}
}


@ -1,52 +0,0 @@
package ecs
import (
"fmt"
"strconv"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
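// cleanUpMessage tells the user whether a resource is being removed because
// the build was cancelled or halted, or as part of normal cleanup.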
func cleanUpMessage(state multistep.StateBag, module string) {
_, cancelled := state.GetOk(multistep.StateCancelled)
_, halted := state.GetOk(multistep.StateHalted)
ui := state.Get("ui").(packersdk.Ui)
if cancelled || halted {
ui.Say(fmt.Sprintf("Deleting %s because of cancellation or error...", module))
} else {
ui.Say(fmt.Sprintf("Cleaning up '%s'", module))
}
}
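// halt stores the (optionally prefixed) error in the state bag, prints it to
// the UI, and returns multistep.ActionHalt so the build stops.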
func halt(state multistep.StateBag, err error, prefix string) multistep.StepAction {
ui := state.Get("ui").(packersdk.Ui)
if prefix != "" {
err = fmt.Errorf("%s: %s", prefix, err)
}
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
func convertNumber(value int) string {
if value <= 0 {
return ""
}
return strconv.Itoa(value)
}
func ContainsInArray(arr []string, value string) bool {
for _, item := range arr {
if item == value {
return true
}
}
return false
}


@ -1,156 +0,0 @@
//go:generate struct-markdown
package ecs
import (
"errors"
"fmt"
"os"
"strings"
"github.com/hashicorp/packer-plugin-sdk/communicator"
"github.com/hashicorp/packer-plugin-sdk/template/config"
"github.com/hashicorp/packer-plugin-sdk/template/interpolate"
"github.com/hashicorp/packer-plugin-sdk/uuid"
)
type RunConfig struct {
AssociatePublicIpAddress bool `mapstructure:"associate_public_ip_address"`
// ID of the zone to which the disk belongs.
ZoneId string `mapstructure:"zone_id" required:"false"`
// Whether an ECS instance is I/O optimized or not. If this option is not
// provided, the value will be determined by product API according to what
// `instance_type` is used.
IOOptimized config.Trilean `mapstructure:"io_optimized" required:"false"`
// Type of the instance. For values, see [Instance Type
// Table](https://www.alibabacloud.com/help/doc-detail/25378.htm?spm=a3c0i.o25499en.a3.9.14a36ac8iYqKRA).
// You can also obtain the latest instance type table by invoking the
// [Querying Instance Type
// Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik)
// interface.
InstanceType string `mapstructure:"instance_type" required:"true"`
Description string `mapstructure:"description"`
// This is the base image ID from which you want to
// create your customized images.
AlicloudSourceImage string `mapstructure:"source_image" required:"true"`
// Whether to force shutdown upon device
// restart. The default value is `false`.
//
// If it is set to `false`, the system is shut down normally; if it is set to
// `true`, the system is forced to shut down.
ForceStopInstance bool `mapstructure:"force_stop_instance" required:"false"`
// If this option is set to true, Packer
// will not stop the instance for you, and you need to make sure the instance
// will be stopped in the final provisioner command. Otherwise, Packer will
// timeout while waiting the instance to be stopped. This option is provided
// for some specific scenarios that you want to stop the instance by yourself.
// E.g., Sysprep a windows which may shutdown the instance within its command.
// The default value is false.
DisableStopInstance bool `mapstructure:"disable_stop_instance" required:"false"`
// Ram Role to apply when launching the instance.
RamRoleName string `mapstructure:"ram_role_name" required:"false"`
// ID of the security group to which a newly
// created instance belongs. Mutual access is allowed between instances in one
// security group. If not specified, the newly created instance will be added
// to the default security group. If the default group doesn't exist, or the
// number of instances in it has reached the maximum limit, a new security
// group will be created automatically.
SecurityGroupId string `mapstructure:"security_group_id" required:"false"`
// The security group name. The default value
// is blank. [2, 128] English or Chinese characters, must begin with an
// uppercase/lowercase letter or Chinese character. Can contain numbers, .,
// _ or -. It cannot begin with `http://` or `https://`.
SecurityGroupName string `mapstructure:"security_group_name" required:"false"`
// User data to apply when launching the instance. Note
// that you need to be careful about escaping characters due to the templates
// being JSON. It is often more convenient to use user_data_file, instead.
// Packer will not automatically wait for a user script to finish before
// shutting down the instance; this must be handled in a provisioner.
UserData string `mapstructure:"user_data" required:"false"`
// Path to a file that will be used for the user
// data when launching the instance.
UserDataFile string `mapstructure:"user_data_file" required:"false"`
// VPC ID allocated by the system.
VpcId string `mapstructure:"vpc_id" required:"false"`
// The VPC name. The default value is blank. [2, 128]
// English or Chinese characters, must begin with an uppercase/lowercase
// letter or Chinese character. Can contain numbers, _ and -. The VPC
// name will appear on the console. Cannot begin with `http://` or
// `https://`.
VpcName string `mapstructure:"vpc_name" required:"false"`
// Value options: 192.168.0.0/16 and
// 172.16.0.0/16. When not specified, the default value is 172.16.0.0/16.
CidrBlock string `mapstructure:"vpc_cidr_block" required:"false"`
// The ID of the VSwitch to be used.
VSwitchId string `mapstructure:"vswitch_id" required:"false"`
// The name of the VSwitch to be used.
VSwitchName string `mapstructure:"vswitch_name" required:"false"`
// Display name of the instance, which is a string of 2 to 128 Chinese or
// English characters. It must begin with an uppercase/lowercase letter or
// a Chinese character and can contain numerals, `.`, `_`, or `-`. The
// instance name is displayed on the Alibaba Cloud console. If this
// parameter is not specified, the default value is InstanceId of the
// instance. It cannot begin with `http://` or `https://`.
InstanceName string `mapstructure:"instance_name" required:"false"`
// Internet charge type, which can be
// `PayByTraffic` or `PayByBandwidth`. Optional values:
// - `PayByBandwidth`
// - `PayByTraffic`
//
// If this parameter is not specified, the default value is `PayByBandwidth`.
// For the regions outside China, currently only `PayByTraffic` is supported,
// so you must set it manually.
InternetChargeType string `mapstructure:"internet_charge_type" required:"false"`
// Maximum outgoing bandwidth to the
// public network, measured in Mbps (Mega bits per second).
//
// Value range:
// - `PayByBandwidth`: \[0, 100\]. If this parameter is not specified, API
// automatically sets it to 0 Mbps.
// - `PayByTraffic`: \[1, 100\]. If this parameter is not specified, an
// error is returned.
InternetMaxBandwidthOut int `mapstructure:"internet_max_bandwidth_out" required:"false"`
// Timeout of creating snapshot(s).
// The default timeout is 3600 seconds if this option is not set or is set
// to 0. For those disks containing lots of data, it may require a higher
// timeout value.
WaitSnapshotReadyTimeout int `mapstructure:"wait_snapshot_ready_timeout" required:"false"`
// Communicator settings
Comm communicator.Config `mapstructure:",squash"`
// If this value is true, packer will connect to
// the ECS created through private ip instead of allocating a public ip or an
// EIP. The default value is false.
SSHPrivateIp bool `mapstructure:"ssh_private_ip" required:"false"`
}
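// A minimal sketch of a builder block exercising these run options; the
// values below are the ones used throughout the acceptance tests in this
// package and are purely illustrative:
//
//   "builders": [{
//     "type": "alicloud-ecs",
//     "region": "cn-beijing",
//     "instance_type": "ecs.n1.tiny",
//     "source_image": "ubuntu_18_04_64_20G_alibase_20190509.vhd",
//     "io_optimized": "true",
//     "ssh_username": "root",
//     "image_name": "packer-test_{{timestamp}}"
//   }]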
func (c *RunConfig) Prepare(ctx *interpolate.Context) []error {
if c.Comm.SSHKeyPairName == "" && c.Comm.SSHTemporaryKeyPairName == "" &&
c.Comm.SSHPrivateKeyFile == "" && c.Comm.SSHPassword == "" && c.Comm.WinRMPassword == "" {
c.Comm.SSHTemporaryKeyPairName = fmt.Sprintf("packer_%s", uuid.TimeOrderedUUID())
}
// Validation
errs := c.Comm.Prepare(ctx)
if c.AlicloudSourceImage == "" {
errs = append(errs, errors.New("A source_image must be specified"))
}
if strings.TrimSpace(c.AlicloudSourceImage) != c.AlicloudSourceImage {
errs = append(errs, errors.New("The source_image can't include spaces"))
}
if c.InstanceType == "" {
errs = append(errs, errors.New("An alicloud_instance_type must be specified"))
}
if c.UserData != "" && c.UserDataFile != "" {
errs = append(errs, fmt.Errorf("Only one of user_data or user_data_file can be specified."))
} else if c.UserDataFile != "" {
if _, err := os.Stat(c.UserDataFile); err != nil {
errs = append(errs, fmt.Errorf("user_data_file not found: %s", c.UserDataFile))
}
}
return errs
}


@ -1,178 +0,0 @@
package ecs
import (
"io/ioutil"
"os"
"testing"
"github.com/hashicorp/packer-plugin-sdk/communicator"
)
func testConfig() *RunConfig {
return &RunConfig{
AlicloudSourceImage: "alicloud_images",
InstanceType: "ecs.n1.tiny",
Comm: communicator.Config{
SSH: communicator.SSH{
SSHUsername: "alicloud",
},
},
}
}
func TestRunConfigPrepare(t *testing.T) {
c := testConfig()
err := c.Prepare(nil)
if len(err) > 0 {
t.Fatalf("err: %s", err)
}
}
func TestRunConfigPrepare_InstanceType(t *testing.T) {
c := testConfig()
c.InstanceType = ""
if err := c.Prepare(nil); len(err) != 1 {
t.Fatalf("err: %s", err)
}
}
func TestRunConfigPrepare_SourceECSImage(t *testing.T) {
c := testConfig()
c.AlicloudSourceImage = ""
if err := c.Prepare(nil); len(err) != 1 {
t.Fatalf("err: %s", err)
}
}
func TestRunConfigPrepare_SSHPort(t *testing.T) {
c := testConfig()
c.Comm.SSHPort = 0
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.Comm.SSHPort != 22 {
t.Fatalf("invalid value: %d", c.Comm.SSHPort)
}
c.Comm.SSHPort = 44
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.Comm.SSHPort != 44 {
t.Fatalf("invalid value: %d", c.Comm.SSHPort)
}
}
func TestRunConfigPrepare_UserData(t *testing.T) {
c := testConfig()
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(tf.Name())
defer tf.Close()
c.UserData = "foo"
c.UserDataFile = tf.Name()
if err := c.Prepare(nil); len(err) != 1 {
t.Fatalf("err: %s", err)
}
}
func TestRunConfigPrepare_UserDataFile(t *testing.T) {
c := testConfig()
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
c.UserDataFile = "idontexistidontthink"
if err := c.Prepare(nil); len(err) != 1 {
t.Fatalf("err: %s", err)
}
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(tf.Name())
defer tf.Close()
c.UserDataFile = tf.Name()
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
}
func TestRunConfigPrepare_TemporaryKeyPairName(t *testing.T) {
c := testConfig()
c.Comm.SSHTemporaryKeyPairName = ""
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.Comm.SSHTemporaryKeyPairName == "" {
t.Fatal("keypair name is empty")
}
c.Comm.SSHTemporaryKeyPairName = "ssh-key-123"
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.Comm.SSHTemporaryKeyPairName != "ssh-key-123" {
t.Fatal("keypair name does not match")
}
}
func TestRunConfigPrepare_SSHPrivateIp(t *testing.T) {
c := testConfig()
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.SSHPrivateIp != false {
t.Fatalf("invalid value, expected: %t, actual: %t", false, c.SSHPrivateIp)
}
c.SSHPrivateIp = true
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.SSHPrivateIp != true {
t.Fatalf("invalid value, expected: %t, actual: %t", true, c.SSHPrivateIp)
}
c.SSHPrivateIp = false
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.SSHPrivateIp != false {
t.Fatalf("invalid value, expected: %t, actual: %t", false, c.SSHPrivateIp)
}
}
func TestRunConfigPrepare_DisableStopInstance(t *testing.T) {
c := testConfig()
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.DisableStopInstance != false {
t.Fatalf("invalid value, expected: %t, actual: %t", false, c.DisableStopInstance)
}
c.DisableStopInstance = true
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.DisableStopInstance != true {
t.Fatalf("invalid value, expected: %t, actual: %t", true, c.DisableStopInstance)
}
c.DisableStopInstance = false
if err := c.Prepare(nil); len(err) != 0 {
t.Fatalf("err: %s", err)
}
if c.DisableStopInstance != false {
t.Fatalf("invalid value, expected: %t, actual: %t", false, c.DisableStopInstance)
}
}


@ -1,23 +0,0 @@
package ecs
import (
"time"
"github.com/hashicorp/packer-plugin-sdk/multistep"
)
var (
// modified in tests
sshHostSleepDuration = time.Second
)
type alicloudSSHHelper interface {
}
// SSHHost returns a function that can be given to the SSH communicator
func SSHHost(e alicloudSSHHelper, private bool) func(multistep.StateBag) (string, error) {
return func(state multistep.StateBag) (string, error) {
ipAddress := state.Get("ipaddress").(string)
return ipAddress, nil
}
}


@ -1,77 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepAttachKeyPair struct {
}
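// attachKeyPairNotRetryErrors lists the error codes that abort the
// AttachKeyPair retry loop immediately (see EvalNotRetryErrorType); any other
// error is retried.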
var attachKeyPairNotRetryErrors = []string{
"MissingParameter",
"DependencyViolation.WindowsInstance",
"InvalidKeyPairName.NotFound",
"InvalidRegionId.NotFound",
}
func (s *stepAttachKeyPair) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packersdk.Ui)
client := state.Get("client").(*ClientWrapper)
config := state.Get("config").(*Config)
instance := state.Get("instance").(*ecs.Instance)
keyPairName := config.Comm.SSHKeyPairName
if keyPairName == "" {
return multistep.ActionContinue
}
_, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateAttachKeyPairRequest()
request.RegionId = config.AlicloudRegion
request.KeyPairName = keyPairName
request.InstanceIds = "[\"" + instance.InstanceId + "\"]"
return client.AttachKeyPair(request)
},
EvalFunc: client.EvalCouldRetryResponse(attachKeyPairNotRetryErrors, EvalNotRetryErrorType),
})
if err != nil {
return halt(state, err, fmt.Sprintf("Error attaching keypair %s to instance %s", keyPairName, instance.InstanceId))
}
ui.Message(fmt.Sprintf("Attach keypair %s to instance: %s", keyPairName, instance.InstanceId))
return multistep.ActionContinue
}
func (s *stepAttachKeyPair) Cleanup(state multistep.StateBag) {
client := state.Get("client").(*ClientWrapper)
config := state.Get("config").(*Config)
ui := state.Get("ui").(packersdk.Ui)
instance := state.Get("instance").(*ecs.Instance)
keyPairName := config.Comm.SSHKeyPairName
if keyPairName == "" {
return
}
detachKeyPairRequest := ecs.CreateDetachKeyPairRequest()
detachKeyPairRequest.RegionId = config.AlicloudRegion
detachKeyPairRequest.KeyPairName = keyPairName
detachKeyPairRequest.InstanceIds = fmt.Sprintf("[\"%s\"]", instance.InstanceId)
_, err := client.DetachKeyPair(detachKeyPairRequest)
if err != nil {
err := fmt.Errorf("Error detaching keypair %s from instance %s: %s", keyPairName,
instance.InstanceId, err)
state.Put("error", err)
ui.Error(err.Error())
return
}
ui.Message(fmt.Sprintf("Detach keypair %s from instance: %s", keyPairName, instance.InstanceId))
}


@ -1,57 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
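// stepCheckAlicloudSourceImage verifies that the configured source_image
// exists in the build region (also searching marketplace images) and stores
// the first match in the state bag under "source_image".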
type stepCheckAlicloudSourceImage struct {
SourceECSImageId string
}
func (s *stepCheckAlicloudSourceImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
config := state.Get("config").(*Config)
ui := state.Get("ui").(packersdk.Ui)
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = config.AlicloudRegion
describeImagesRequest.ImageId = config.AlicloudSourceImage
if config.AlicloudSkipImageValidation {
describeImagesRequest.ShowExpired = "true"
}
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return halt(state, err, "Error querying alicloud image")
}
images := imagesResponse.Images.Image
// Describe marketplace image
describeImagesRequest.ImageOwnerAlias = "marketplace"
marketImagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return halt(state, err, "Error querying alicloud marketplace image")
}
marketImages := marketImagesResponse.Images.Image
if len(marketImages) > 0 {
images = append(images, marketImages...)
}
if len(images) == 0 {
err := fmt.Errorf("No alicloud image was found matching filters: %v", config.AlicloudSourceImage)
return halt(state, err, "")
}
ui.Message(fmt.Sprintf("Found image ID: %s", images[0].ImageId))
state.Put("source_image", &images[0])
return multistep.ActionContinue
}
func (s *stepCheckAlicloudSourceImage) Cleanup(multistep.StateBag) {}


@ -1,165 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors"
"github.com/hashicorp/packer-plugin-sdk/uuid"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
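// stepConfigAlicloudEIP either reuses the instance's private IP (when
// ssh_private_ip is set) or allocates and associates an EIP, storing the
// chosen address in the state bag under "ipaddress"; the EIP is unassociated
// and released again in Cleanup.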
type stepConfigAlicloudEIP struct {
AssociatePublicIpAddress bool
RegionId string
InternetChargeType string
InternetMaxBandwidthOut int
allocatedId string
SSHPrivateIp bool
}
var allocateEipAddressRetryErrors = []string{
"LastTokenProcessing",
}
func (s *stepConfigAlicloudEIP) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
instance := state.Get("instance").(*ecs.Instance)
if s.SSHPrivateIp {
ipaddress := instance.VpcAttributes.PrivateIpAddress.IpAddress
if len(ipaddress) == 0 {
ui.Say("Failed to get private ip of instance")
return multistep.ActionHalt
}
state.Put("ipaddress", ipaddress[0])
return multistep.ActionContinue
}
ui.Say("Allocating eip...")
allocateEipAddressRequest := s.buildAllocateEipAddressRequest(state)
allocateEipAddressResponse, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.AllocateEipAddress(allocateEipAddressRequest)
},
EvalFunc: client.EvalCouldRetryResponse(allocateEipAddressRetryErrors, EvalRetryErrorType),
})
if err != nil {
return halt(state, err, "Error allocating eip")
}
ipaddress := allocateEipAddressResponse.(*ecs.AllocateEipAddressResponse).EipAddress
ui.Message(fmt.Sprintf("Allocated eip: %s", ipaddress))
allocateId := allocateEipAddressResponse.(*ecs.AllocateEipAddressResponse).AllocationId
s.allocatedId = allocateId
err = s.waitForEipStatus(client, instance.RegionId, s.allocatedId, EipStatusAvailable)
if err != nil {
return halt(state, err, "Error waiting for eip to become available")
}
associateEipAddressRequest := ecs.CreateAssociateEipAddressRequest()
associateEipAddressRequest.AllocationId = allocateId
associateEipAddressRequest.InstanceId = instance.InstanceId
if _, err := client.AssociateEipAddress(associateEipAddressRequest); err != nil {
e, ok := err.(errors.Error)
if !ok || e.ErrorCode() != "TaskConflict" {
return halt(state, err, "Error associating eip")
}
ui.Error(fmt.Sprintf("Error associating eip: %s", err))
}
err = s.waitForEipStatus(client, instance.RegionId, s.allocatedId, EipStatusInUse)
if err != nil {
return halt(state, err, "Error waiting for eip to be associated")
}
state.Put("ipaddress", ipaddress)
return multistep.ActionContinue
}
func (s *stepConfigAlicloudEIP) Cleanup(state multistep.StateBag) {
if len(s.allocatedId) == 0 {
return
}
cleanUpMessage(state, "EIP")
client := state.Get("client").(*ClientWrapper)
instance := state.Get("instance").(*ecs.Instance)
ui := state.Get("ui").(packersdk.Ui)
unassociateEipAddressRequest := ecs.CreateUnassociateEipAddressRequest()
unassociateEipAddressRequest.AllocationId = s.allocatedId
unassociateEipAddressRequest.InstanceId = instance.InstanceId
if _, err := client.UnassociateEipAddress(unassociateEipAddressRequest); err != nil {
ui.Say(fmt.Sprintf("Failed to unassociate eip: %s", err))
}
if err := s.waitForEipStatus(client, instance.RegionId, s.allocatedId, EipStatusAvailable); err != nil {
ui.Say(fmt.Sprintf("Timeout while unassociating eip: %s", err))
}
releaseEipAddressRequest := ecs.CreateReleaseEipAddressRequest()
releaseEipAddressRequest.AllocationId = s.allocatedId
if _, err := client.ReleaseEipAddress(releaseEipAddressRequest); err != nil {
ui.Say(fmt.Sprintf("Failed to release eip: %s", err))
}
}
func (s *stepConfigAlicloudEIP) waitForEipStatus(client *ClientWrapper, regionId string, allocationId string, expectedStatus string) error {
describeEipAddressesRequest := ecs.CreateDescribeEipAddressesRequest()
describeEipAddressesRequest.RegionId = regionId
describeEipAddressesRequest.AllocationId = s.allocatedId
_, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
response, err := client.DescribeEipAddresses(describeEipAddressesRequest)
if err == nil && len(response.EipAddresses.EipAddress) == 0 {
err = fmt.Errorf("allocated eip was not found")
}
return response, err
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
return WaitForExpectToRetry
}
eipAddressesResponse := response.(*ecs.DescribeEipAddressesResponse)
eipAddresses := eipAddressesResponse.EipAddresses.EipAddress
for _, eipAddress := range eipAddresses {
if eipAddress.Status == expectedStatus {
return WaitForExpectSuccess
}
}
return WaitForExpectToRetry
},
RetryTimes: shortRetryTimes,
})
return err
}
func (s *stepConfigAlicloudEIP) buildAllocateEipAddressRequest(state multistep.StateBag) *ecs.AllocateEipAddressRequest {
instance := state.Get("instance").(*ecs.Instance)
request := ecs.CreateAllocateEipAddressRequest()
request.ClientToken = uuid.TimeOrderedUUID()
request.RegionId = instance.RegionId
request.InternetChargeType = s.InternetChargeType
request.Bandwidth = string(convertNumber(s.InternetMaxBandwidthOut))
return request
}


@@ -1,132 +0,0 @@
package ecs
import (
"context"
"fmt"
"os"
"runtime"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/communicator"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepConfigAlicloudKeyPair struct {
Debug bool
Comm *communicator.Config
DebugKeyPath string
RegionId string
keyName string
}
func (s *stepConfigAlicloudKeyPair) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packersdk.Ui)
if s.Comm.SSHPrivateKeyFile != "" {
ui.Say("Using existing SSH private key")
privateKeyBytes, err := s.Comm.ReadSSHPrivateKeyFile()
if err != nil {
state.Put("error", err)
return multistep.ActionHalt
}
s.Comm.SSHPrivateKey = privateKeyBytes
return multistep.ActionContinue
}
if s.Comm.SSHAgentAuth && s.Comm.SSHKeyPairName == "" {
ui.Say("Using SSH Agent with key pair in source image")
return multistep.ActionContinue
}
if s.Comm.SSHAgentAuth && s.Comm.SSHKeyPairName != "" {
ui.Say(fmt.Sprintf("Using SSH Agent for existing key pair %s", s.Comm.SSHKeyPairName))
return multistep.ActionContinue
}
if s.Comm.SSHTemporaryKeyPairName == "" {
ui.Say("Not using temporary keypair")
s.Comm.SSHKeyPairName = ""
return multistep.ActionContinue
}
client := state.Get("client").(*ClientWrapper)
ui.Say(fmt.Sprintf("Creating temporary keypair: %s", s.Comm.SSHTemporaryKeyPairName))
createKeyPairRequest := ecs.CreateCreateKeyPairRequest()
createKeyPairRequest.RegionId = s.RegionId
createKeyPairRequest.KeyPairName = s.Comm.SSHTemporaryKeyPairName
keyResp, err := client.CreateKeyPair(createKeyPairRequest)
if err != nil {
return halt(state, err, "Error creating temporary keypair")
}
// Set the keyname so we know to delete it later
s.keyName = s.Comm.SSHTemporaryKeyPairName
// Set some state data for use in future steps
s.Comm.SSHKeyPairName = s.keyName
s.Comm.SSHPrivateKey = []byte(keyResp.PrivateKeyBody)
// If we're in debug mode, output the private key to the working
// directory.
if s.Debug {
ui.Message(fmt.Sprintf("Saving key for debug purposes: %s", s.DebugKeyPath))
f, err := os.Create(s.DebugKeyPath)
if err != nil {
state.Put("error", fmt.Errorf("Error saving debug key: %s", err))
return multistep.ActionHalt
}
defer f.Close()
// Write the key out
if _, err := f.Write([]byte(keyResp.PrivateKeyBody)); err != nil {
state.Put("error", fmt.Errorf("Error saving debug key: %s", err))
return multistep.ActionHalt
}
// Chmod it so that it is SSH ready
if runtime.GOOS != "windows" {
if err := f.Chmod(0600); err != nil {
state.Put("error", fmt.Errorf("Error setting permissions of debug key: %s", err))
return multistep.ActionHalt
}
}
}
return multistep.ActionContinue
}
func (s *stepConfigAlicloudKeyPair) Cleanup(state multistep.StateBag) {
// If no key name is set, then we never created it, so just return
// If we used an SSH private key file, do not go about deleting
// keypairs
if s.Comm.SSHPrivateKeyFile != "" || (s.Comm.SSHKeyPairName == "" && s.keyName == "") {
return
}
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
// Remove the keypair
ui.Say("Deleting temporary keypair...")
deleteKeyPairsRequest := ecs.CreateDeleteKeyPairsRequest()
deleteKeyPairsRequest.RegionId = s.RegionId
deleteKeyPairsRequest.KeyPairNames = fmt.Sprintf("[\"%s\"]", s.keyName)
_, err := client.DeleteKeyPairs(deleteKeyPairsRequest)
if err != nil {
ui.Error(fmt.Sprintf(
"Error cleaning up keypair. Please delete the key manually: %s", s.keyName))
}
// Also remove the physical key if we're debugging.
if s.Debug {
if err := os.Remove(s.DebugKeyPath); err != nil {
ui.Error(fmt.Sprintf(
"Error removing debug key '%s': %s", s.DebugKeyPath, err))
}
}
}


@@ -1,48 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepConfigAlicloudPublicIP struct {
publicIPAddress string
RegionId string
SSHPrivateIp bool
}
func (s *stepConfigAlicloudPublicIP) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
instance := state.Get("instance").(*ecs.Instance)
if s.SSHPrivateIp {
ipaddress := instance.InnerIpAddress.IpAddress
if len(ipaddress) == 0 {
ui.Say("Failed to get private ip of instance")
return multistep.ActionHalt
}
state.Put("ipaddress", ipaddress[0])
return multistep.ActionContinue
}
allocatePublicIpAddressRequest := ecs.CreateAllocatePublicIpAddressRequest()
allocatePublicIpAddressRequest.InstanceId = instance.InstanceId
ipaddress, err := client.AllocatePublicIpAddress(allocatePublicIpAddressRequest)
if err != nil {
return halt(state, err, "Error allocating public ip")
}
s.publicIPAddress = ipaddress.IpAddress
ui.Say(fmt.Sprintf("Allocated public ip address %s.", ipaddress.IpAddress))
state.Put("ipaddress", ipaddress.IpAddress)
return multistep.ActionContinue
}
func (s *stepConfigAlicloudPublicIP) Cleanup(state multistep.StateBag) {
}


@@ -1,152 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer-plugin-sdk/uuid"
)
type stepConfigAlicloudSecurityGroup struct {
SecurityGroupId string
SecurityGroupName string
Description string
VpcId string
RegionId string
isCreate bool
}
var createSecurityGroupRetryErrors = []string{
"IdempotentProcessing",
}
var deleteSecurityGroupRetryErrors = []string{
"DependencyViolation",
}
func (s *stepConfigAlicloudSecurityGroup) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
networkType := state.Get("networktype").(InstanceNetWork)
if len(s.SecurityGroupId) != 0 {
describeSecurityGroupsRequest := ecs.CreateDescribeSecurityGroupsRequest()
describeSecurityGroupsRequest.RegionId = s.RegionId
describeSecurityGroupsRequest.SecurityGroupId = s.SecurityGroupId
if networkType == InstanceNetworkVpc {
vpcId := state.Get("vpcid").(string)
describeSecurityGroupsRequest.VpcId = vpcId
}
securityGroupsResponse, err := client.DescribeSecurityGroups(describeSecurityGroupsRequest)
if err != nil {
return halt(state, err, "Failed querying security group")
}
securityGroupItems := securityGroupsResponse.SecurityGroups.SecurityGroup
for _, securityGroupItem := range securityGroupItems {
if securityGroupItem.SecurityGroupId == s.SecurityGroupId {
state.Put("securitygroupid", s.SecurityGroupId)
s.isCreate = false
return multistep.ActionContinue
}
}
s.isCreate = false
err = fmt.Errorf("The specified security group {%s} doesn't exist.", s.SecurityGroupId)
return halt(state, err, "")
}
ui.Say("Creating security group...")
createSecurityGroupRequest := s.buildCreateSecurityGroupRequest(state)
securityGroupResponse, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.CreateSecurityGroup(createSecurityGroupRequest)
},
EvalFunc: client.EvalCouldRetryResponse(createSecurityGroupRetryErrors, EvalRetryErrorType),
})
if err != nil {
return halt(state, err, "Failed creating security group")
}
securityGroupId := securityGroupResponse.(*ecs.CreateSecurityGroupResponse).SecurityGroupId
ui.Message(fmt.Sprintf("Created security group: %s", securityGroupId))
state.Put("securitygroupid", securityGroupId)
s.isCreate = true
s.SecurityGroupId = securityGroupId
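// Allow all egress and ingress traffic on the newly created security group (all IP protocols over the default port range and CIDR).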
authorizeSecurityGroupEgressRequest := ecs.CreateAuthorizeSecurityGroupEgressRequest()
authorizeSecurityGroupEgressRequest.SecurityGroupId = securityGroupId
authorizeSecurityGroupEgressRequest.RegionId = s.RegionId
authorizeSecurityGroupEgressRequest.IpProtocol = IpProtocolAll
authorizeSecurityGroupEgressRequest.PortRange = DefaultPortRange
authorizeSecurityGroupEgressRequest.NicType = NicTypeInternet
authorizeSecurityGroupEgressRequest.DestCidrIp = DefaultCidrIp
if _, err := client.AuthorizeSecurityGroupEgress(authorizeSecurityGroupEgressRequest); err != nil {
return halt(state, err, "Failed authorizing security group")
}
authorizeSecurityGroupRequest := ecs.CreateAuthorizeSecurityGroupRequest()
authorizeSecurityGroupRequest.SecurityGroupId = securityGroupId
authorizeSecurityGroupRequest.RegionId = s.RegionId
authorizeSecurityGroupRequest.IpProtocol = IpProtocolAll
authorizeSecurityGroupRequest.PortRange = DefaultPortRange
authorizeSecurityGroupRequest.NicType = NicTypeInternet
authorizeSecurityGroupRequest.SourceCidrIp = DefaultCidrIp
if _, err := client.AuthorizeSecurityGroup(authorizeSecurityGroupRequest); err != nil {
return halt(state, err, "Failed authorizing security group")
}
return multistep.ActionContinue
}
func (s *stepConfigAlicloudSecurityGroup) Cleanup(state multistep.StateBag) {
if !s.isCreate {
return
}
cleanUpMessage(state, "security group")
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
_, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDeleteSecurityGroupRequest()
request.RegionId = s.RegionId
request.SecurityGroupId = s.SecurityGroupId
return client.DeleteSecurityGroup(request)
},
EvalFunc: client.EvalCouldRetryResponse(deleteSecurityGroupRetryErrors, EvalRetryErrorType),
RetryTimes: shortRetryTimes,
})
if err != nil {
ui.Error(fmt.Sprintf("Failed to delete security group, it may still be around: %s", err))
}
}
func (s *stepConfigAlicloudSecurityGroup) buildCreateSecurityGroupRequest(state multistep.StateBag) *ecs.CreateSecurityGroupRequest {
networkType := state.Get("networktype").(InstanceNetWork)
request := ecs.CreateCreateSecurityGroupRequest()
request.ClientToken = uuid.TimeOrderedUUID()
request.RegionId = s.RegionId
request.SecurityGroupName = s.SecurityGroupName
if networkType == InstanceNetworkVpc {
vpcId := state.Get("vpcid").(string)
request.VpcId = vpcId
}
return request
}


@@ -1,148 +0,0 @@
package ecs
import (
"context"
errorsNew "errors"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer-plugin-sdk/uuid"
)
type stepConfigAlicloudVPC struct {
VpcId string
CidrBlock string //192.168.0.0/16 or 172.16.0.0/16 (default)
VpcName string
isCreate bool
}
var createVpcRetryErrors = []string{
"TOKEN_PROCESSING",
}
var deleteVpcRetryErrors = []string{
"DependencyViolation.Instance",
"DependencyViolation.RouteEntry",
"DependencyViolation.VSwitch",
"DependencyViolation.SecurityGroup",
"Forbbiden",
"TaskConflict",
}
func (s *stepConfigAlicloudVPC) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
if len(s.VpcId) != 0 {
describeVpcsRequest := ecs.CreateDescribeVpcsRequest()
describeVpcsRequest.VpcId = s.VpcId
describeVpcsRequest.RegionId = config.AlicloudRegion
vpcsResponse, err := client.DescribeVpcs(describeVpcsRequest)
if err != nil {
return halt(state, err, "Failed querying vpcs")
}
vpcs := vpcsResponse.Vpcs.Vpc
if len(vpcs) > 0 {
state.Put("vpcid", vpcs[0].VpcId)
s.isCreate = false
return multistep.ActionContinue
}
message := fmt.Sprintf("The specified vpc {%s} doesn't exist.", s.VpcId)
return halt(state, errorsNew.New(message), "")
}
ui.Say("Creating vpc...")
createVpcRequest := s.buildCreateVpcRequest(state)
createVpcResponse, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.CreateVpc(createVpcRequest)
},
EvalFunc: client.EvalCouldRetryResponse(createVpcRetryErrors, EvalRetryErrorType),
})
if err != nil {
return halt(state, err, "Failed creating vpc")
}
vpcId := createVpcResponse.(*ecs.CreateVpcResponse).VpcId
_, err = client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDescribeVpcsRequest()
request.RegionId = config.AlicloudRegion
request.VpcId = vpcId
return client.DescribeVpcs(request)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
return WaitForExpectToRetry
}
vpcsResponse := response.(*ecs.DescribeVpcsResponse)
vpcs := vpcsResponse.Vpcs.Vpc
if len(vpcs) > 0 {
for _, vpc := range vpcs {
if vpc.Status == VpcStatusAvailable {
return WaitForExpectSuccess
}
}
}
return WaitForExpectToRetry
},
RetryTimes: shortRetryTimes,
})
if err != nil {
return halt(state, err, "Failed waiting for vpc to become available")
}
ui.Message(fmt.Sprintf("Created vpc: %s", vpcId))
state.Put("vpcid", vpcId)
s.isCreate = true
s.VpcId = vpcId
return multistep.ActionContinue
}
func (s *stepConfigAlicloudVPC) Cleanup(state multistep.StateBag) {
if !s.isCreate {
return
}
cleanUpMessage(state, "VPC")
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
_, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDeleteVpcRequest()
request.VpcId = s.VpcId
return client.DeleteVpc(request)
},
EvalFunc: client.EvalCouldRetryResponse(deleteVpcRetryErrors, EvalRetryErrorType),
RetryTimes: shortRetryTimes,
})
if err != nil {
ui.Error(fmt.Sprintf("Error deleting vpc, it may still be around: %s", err))
}
}
func (s *stepConfigAlicloudVPC) buildCreateVpcRequest(state multistep.StateBag) *ecs.CreateVpcRequest {
config := state.Get("config").(*Config)
request := ecs.CreateCreateVpcRequest()
request.ClientToken = uuid.TimeOrderedUUID()
request.RegionId = config.AlicloudRegion
request.CidrBlock = s.CidrBlock
request.VpcName = s.VpcName
return request
}


@@ -1,207 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer-plugin-sdk/uuid"
)
type stepConfigAlicloudVSwitch struct {
VSwitchId string
ZoneId string
isCreate bool
CidrBlock string
VSwitchName string
}
var createVSwitchRetryErrors = []string{
"TOKEN_PROCESSING",
}
var deleteVSwitchRetryErrors = []string{
"IncorrectVSwitchStatus",
"DependencyViolation",
"DependencyViolation.HaVip",
"IncorrectRouteEntryStatus",
"TaskConflict",
}
func (s *stepConfigAlicloudVSwitch) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
vpcId := state.Get("vpcid").(string)
config := state.Get("config").(*Config)
if len(s.VSwitchId) != 0 {
describeVSwitchesRequest := ecs.CreateDescribeVSwitchesRequest()
describeVSwitchesRequest.VpcId = vpcId
describeVSwitchesRequest.VSwitchId = s.VSwitchId
describeVSwitchesRequest.ZoneId = s.ZoneId
vswitchesResponse, err := client.DescribeVSwitches(describeVSwitchesRequest)
if err != nil {
return halt(state, err, "Failed querying vswitch")
}
vswitch := vswitchesResponse.VSwitches.VSwitch
if len(vswitch) > 0 {
state.Put("vswitchid", vswitch[0].VSwitchId)
s.isCreate = false
return multistep.ActionContinue
}
s.isCreate = false
return halt(state, fmt.Errorf("The specified vswitch {%s} doesn't exist.", s.VSwitchId), "")
}
if s.ZoneId == "" {
describeZonesRequest := ecs.CreateDescribeZonesRequest()
describeZonesRequest.RegionId = config.AlicloudRegion
zonesResponse, err := client.DescribeZones(describeZonesRequest)
if err != nil {
return halt(state, err, "Query for available zones failed")
}
var instanceTypes []string
zones := zonesResponse.Zones.Zone
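// No availability zone was configured: pick the first zone that supports VSwitch creation and offers the requested instance type, collecting the other instance types seen so they can be suggested if nothing matches.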
for _, zone := range zones {
isVSwitchSupported := false
for _, resourceType := range zone.AvailableResourceCreation.ResourceTypes {
if resourceType == "VSwitch" {
isVSwitchSupported = true
}
}
if isVSwitchSupported {
for _, instanceType := range zone.AvailableInstanceTypes.InstanceTypes {
if instanceType == config.InstanceType {
s.ZoneId = zone.ZoneId
break
}
instanceTypes = append(instanceTypes, instanceType)
}
}
}
if s.ZoneId == "" {
if len(instanceTypes) > 0 {
ui.Say(fmt.Sprintf("The instance type %s isn't available in this region."+
"\n You can either change the instance to one of following: %v \n"+
"or choose another region.", config.InstanceType, instanceTypes))
state.Put("error", fmt.Errorf("The instance type %s isn't available in this region."+
"\n You can either change the instance to one of following: %v \n"+
"or choose another region.", config.InstanceType, instanceTypes))
return multistep.ActionHalt
} else {
ui.Say(fmt.Sprintf("The instance type %s isn't available in this region."+
"\n You can change to other regions.", config.InstanceType))
state.Put("error", fmt.Errorf("The instance type %s isn't available in this region."+
"\n You can change to other regions.", config.InstanceType))
return multistep.ActionHalt
}
}
}
if config.CidrBlock == "" {
s.CidrBlock = DefaultCidrBlock // use the default CidrBlock
}
ui.Say("Creating vswitch...")
createVSwitchRequest := s.buildCreateVSwitchRequest(state)
createVSwitchResponse, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.CreateVSwitch(createVSwitchRequest)
},
EvalFunc: client.EvalCouldRetryResponse(createVSwitchRetryErrors, EvalRetryErrorType),
})
if err != nil {
return halt(state, err, "Error Creating vswitch")
}
vSwitchId := createVSwitchResponse.(*ecs.CreateVSwitchResponse).VSwitchId
describeVSwitchesRequest := ecs.CreateDescribeVSwitchesRequest()
describeVSwitchesRequest.VpcId = vpcId
describeVSwitchesRequest.VSwitchId = vSwitchId
_, err = client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.DescribeVSwitches(describeVSwitchesRequest)
},
EvalFunc: func(response responses.AcsResponse, err error) WaitForExpectEvalResult {
if err != nil {
return WaitForExpectToRetry
}
vSwitchesResponse := response.(*ecs.DescribeVSwitchesResponse)
vSwitches := vSwitchesResponse.VSwitches.VSwitch
if len(vSwitches) > 0 {
for _, vSwitch := range vSwitches {
if vSwitch.Status == VSwitchStatusAvailable {
return WaitForExpectSuccess
}
}
}
return WaitForExpectToRetry
},
RetryTimes: shortRetryTimes,
})
if err != nil {
return halt(state, err, "Timeout waiting for vswitch to become available")
}
ui.Message(fmt.Sprintf("Created vswitch: %s", vSwitchId))
state.Put("vswitchid", vSwitchId)
s.isCreate = true
s.VSwitchId = vSwitchId
return multistep.ActionContinue
}
func (s *stepConfigAlicloudVSwitch) Cleanup(state multistep.StateBag) {
if !s.isCreate {
return
}
cleanUpMessage(state, "vSwitch")
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
_, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDeleteVSwitchRequest()
request.VSwitchId = s.VSwitchId
return client.DeleteVSwitch(request)
},
EvalFunc: client.EvalCouldRetryResponse(deleteVSwitchRetryErrors, EvalRetryErrorType),
RetryTimes: shortRetryTimes,
})
if err != nil {
ui.Error(fmt.Sprintf("Error deleting vswitch, it may still be around: %s", err))
}
}
func (s *stepConfigAlicloudVSwitch) buildCreateVSwitchRequest(state multistep.StateBag) *ecs.CreateVSwitchRequest {
vpcId := state.Get("vpcid").(string)
request := ecs.CreateCreateVSwitchRequest()
request.ClientToken = uuid.TimeOrderedUUID()
request.CidrBlock = s.CidrBlock
request.ZoneId = s.ZoneId
request.VpcId = vpcId
request.VSwitchName = s.VSwitchName
return request
}


@@ -1,144 +0,0 @@
package ecs
import (
"context"
"fmt"
"time"
"github.com/hashicorp/packer-plugin-sdk/random"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer-plugin-sdk/uuid"
)
type stepCreateAlicloudImage struct {
AlicloudImageIgnoreDataDisks bool
WaitSnapshotReadyTimeout int
image *ecs.Image
}
var createImageRetryErrors = []string{
"IdempotentProcessing",
}
func (s *stepCreateAlicloudImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
tempImageName := config.AlicloudImageName
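// When encryption is requested, the image is first built under a temporary random name; stepRegionCopyAlicloudImage later produces the encrypted copy under the configured name, and Cleanup deletes this temporary image.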
if config.ImageEncrypted.True() {
tempImageName = fmt.Sprintf("packer_%s", random.AlphaNum(7))
ui.Say(fmt.Sprintf("Creating temporary image for encryption: %s", tempImageName))
} else {
ui.Say(fmt.Sprintf("Creating image: %s", tempImageName))
}
createImageRequest := s.buildCreateImageRequest(state, tempImageName)
createImageResponse, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.CreateImage(createImageRequest)
},
EvalFunc: client.EvalCouldRetryResponse(createImageRetryErrors, EvalRetryErrorType),
})
if err != nil {
return halt(state, err, "Error creating image")
}
imageId := createImageResponse.(*ecs.CreateImageResponse).ImageId
imagesResponse, err := client.WaitForImageStatus(config.AlicloudRegion, imageId, ImageStatusAvailable, time.Duration(s.WaitSnapshotReadyTimeout)*time.Second)
// save image first for cleaning up if timeout
images := imagesResponse.(*ecs.DescribeImagesResponse).Images.Image
if len(images) == 0 {
return halt(state, err, "Unable to find created image")
}
s.image = &images[0]
if err != nil {
return halt(state, err, "Timeout waiting for image to be created")
}
var snapshotIds []string
for _, device := range images[0].DiskDeviceMappings.DiskDeviceMapping {
snapshotIds = append(snapshotIds, device.SnapshotId)
}
state.Put("alicloudimage", imageId)
state.Put("alicloudsnapshots", snapshotIds)
alicloudImages := make(map[string]string)
alicloudImages[config.AlicloudRegion] = images[0].ImageId
state.Put("alicloudimages", alicloudImages)
return multistep.ActionContinue
}
func (s *stepCreateAlicloudImage) Cleanup(state multistep.StateBag) {
if s.image == nil {
return
}
config := state.Get("config").(*Config)
encryptedSet := config.ImageEncrypted.True()
_, cancelled := state.GetOk(multistep.StateCancelled)
_, halted := state.GetOk(multistep.StateHalted)
if !cancelled && !halted && !encryptedSet {
return
}
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
if !cancelled && !halted && encryptedSet {
ui.Say(fmt.Sprintf("Deleting temporary image %s(%s) and related snapshots after finishing encryption...", s.image.ImageId, s.image.ImageName))
} else {
ui.Say("Deleting the image and related snapshots because of cancellation or error...")
}
deleteImageRequest := ecs.CreateDeleteImageRequest()
deleteImageRequest.RegionId = config.AlicloudRegion
deleteImageRequest.ImageId = s.image.ImageId
if _, err := client.DeleteImage(deleteImageRequest); err != nil {
ui.Error(fmt.Sprintf("Error deleting image, it may still be around: %s", err))
return
}
//Delete the snapshot of this image
for _, diskDevices := range s.image.DiskDeviceMappings.DiskDeviceMapping {
deleteSnapshotRequest := ecs.CreateDeleteSnapshotRequest()
deleteSnapshotRequest.SnapshotId = diskDevices.SnapshotId
if _, err := client.DeleteSnapshot(deleteSnapshotRequest); err != nil {
ui.Error(fmt.Sprintf("Error deleting snapshot, it may still be around: %s", err))
return
}
}
}
func (s *stepCreateAlicloudImage) buildCreateImageRequest(state multistep.StateBag, imageName string) *ecs.CreateImageRequest {
config := state.Get("config").(*Config)
request := ecs.CreateCreateImageRequest()
request.ClientToken = uuid.TimeOrderedUUID()
request.RegionId = config.AlicloudRegion
request.ImageName = imageName
request.ImageVersion = config.AlicloudImageVersion
request.Description = config.AlicloudImageDescription
if s.AlicloudImageIgnoreDataDisks {
snapshotId := state.Get("alicloudsnapshot").(string)
request.SnapshotId = snapshotId
} else {
instance := state.Get("instance").(*ecs.Instance)
request.InstanceId = instance.InstanceId
}
return request
}


@@ -1,210 +0,0 @@
package ecs
import (
"context"
"encoding/base64"
"fmt"
"io/ioutil"
"strconv"
"github.com/hashicorp/packer-plugin-sdk/uuid"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/responses"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
confighelper "github.com/hashicorp/packer-plugin-sdk/template/config"
)
type stepCreateAlicloudInstance struct {
IOOptimized confighelper.Trilean
InstanceType string
UserData string
UserDataFile string
instanceId string
RamRoleName string
RegionId string
InternetChargeType string
InternetMaxBandwidthOut int
InstanceName string
ZoneId string
instance *ecs.Instance
}
var createInstanceRetryErrors = []string{
"IdempotentProcessing",
}
var deleteInstanceRetryErrors = []string{
"IncorrectInstanceStatus.Initializing",
}
func (s *stepCreateAlicloudInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
ui.Say("Creating instance...")
createInstanceRequest, err := s.buildCreateInstanceRequest(state)
if err != nil {
return halt(state, err, "")
}
createInstanceResponse, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
return client.CreateInstance(createInstanceRequest)
},
EvalFunc: client.EvalCouldRetryResponse(createInstanceRetryErrors, EvalRetryErrorType),
})
if err != nil {
return halt(state, err, "Error creating instance")
}
instanceId := createInstanceResponse.(*ecs.CreateInstanceResponse).InstanceId
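// A newly created instance stays in the Stopped status; it is started later by stepRunAlicloudInstance.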
_, err = client.WaitForInstanceStatus(s.RegionId, instanceId, InstanceStatusStopped)
if err != nil {
return halt(state, err, "Error waiting create instance")
}
describeInstancesRequest := ecs.CreateDescribeInstancesRequest()
describeInstancesRequest.InstanceIds = fmt.Sprintf("[\"%s\"]", instanceId)
instances, err := client.DescribeInstances(describeInstancesRequest)
if err != nil {
return halt(state, err, "")
}
ui.Message(fmt.Sprintf("Created instance: %s", instanceId))
s.instance = &instances.Instances.Instance[0]
state.Put("instance", s.instance)
// instance_id is the generic term used so that users can have access to the
// instance id inside of the provisioners, used in step_provision.
state.Put("instance_id", instanceId)
return multistep.ActionContinue
}
func (s *stepCreateAlicloudInstance) Cleanup(state multistep.StateBag) {
if s.instance == nil {
return
}
cleanUpMessage(state, "instance")
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
_, err := client.WaitForExpected(&WaitForExpectArgs{
RequestFunc: func() (responses.AcsResponse, error) {
request := ecs.CreateDeleteInstanceRequest()
request.InstanceId = s.instance.InstanceId
request.Force = requests.NewBoolean(true)
return client.DeleteInstance(request)
},
EvalFunc: client.EvalCouldRetryResponse(deleteInstanceRetryErrors, EvalRetryErrorType),
RetryTimes: shortRetryTimes,
})
if err != nil {
ui.Say(fmt.Sprintf("Failed to clean up instance %s: %s", s.instance.InstanceId, err))
}
}
func (s *stepCreateAlicloudInstance) buildCreateInstanceRequest(state multistep.StateBag) (*ecs.CreateInstanceRequest, error) {
request := ecs.CreateCreateInstanceRequest()
request.ClientToken = uuid.TimeOrderedUUID()
request.RegionId = s.RegionId
request.InstanceType = s.InstanceType
request.InstanceName = s.InstanceName
request.RamRoleName = s.RamRoleName
request.ZoneId = s.ZoneId
sourceImage := state.Get("source_image").(*ecs.Image)
request.ImageId = sourceImage.ImageId
securityGroupId := state.Get("securitygroupid").(string)
request.SecurityGroupId = securityGroupId
networkType := state.Get("networktype").(InstanceNetWork)
if networkType == InstanceNetworkVpc {
vswitchId := state.Get("vswitchid").(string)
request.VSwitchId = vswitchId
userData, err := s.getUserData(state)
if err != nil {
return nil, err
}
request.UserData = userData
} else {
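// Classic (non-VPC) network: fall back to the default internet charge type and outbound bandwidth.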
if s.InternetChargeType == "" {
s.InternetChargeType = "PayByTraffic"
}
if s.InternetMaxBandwidthOut == 0 {
s.InternetMaxBandwidthOut = 5
}
}
request.InternetChargeType = s.InternetChargeType
request.InternetMaxBandwidthOut = requests.Integer(convertNumber(s.InternetMaxBandwidthOut))
if s.IOOptimized.True() {
request.IoOptimized = IOOptimizedOptimized
} else if s.IOOptimized.False() {
request.IoOptimized = IOOptimizedNone
}
config := state.Get("config").(*Config)
password := config.Comm.SSHPassword
if password == "" && config.Comm.WinRMPassword != "" {
password = config.Comm.WinRMPassword
}
request.Password = password
systemDisk := config.AlicloudImageConfig.ECSSystemDiskMapping
request.SystemDiskDiskName = systemDisk.DiskName
request.SystemDiskCategory = systemDisk.DiskCategory
request.SystemDiskSize = requests.Integer(convertNumber(systemDisk.DiskSize))
request.SystemDiskDescription = systemDisk.Description
imageDisks := config.AlicloudImageConfig.ECSImagesDiskMappings
var dataDisks []ecs.CreateInstanceDataDisk
for _, imageDisk := range imageDisks {
var dataDisk ecs.CreateInstanceDataDisk
dataDisk.DiskName = imageDisk.DiskName
dataDisk.Category = imageDisk.DiskCategory
dataDisk.Size = string(convertNumber(imageDisk.DiskSize))
dataDisk.SnapshotId = imageDisk.SnapshotId
dataDisk.Description = imageDisk.Description
dataDisk.DeleteWithInstance = strconv.FormatBool(imageDisk.DeleteWithInstance)
dataDisk.Device = imageDisk.Device
if imageDisk.Encrypted != confighelper.TriUnset {
dataDisk.Encrypted = strconv.FormatBool(imageDisk.Encrypted.True())
}
dataDisks = append(dataDisks, dataDisk)
}
request.DataDisk = &dataDisks
return request, nil
}
func (s *stepCreateAlicloudInstance) getUserData(state multistep.StateBag) (string, error) {
userData := s.UserData
if s.UserDataFile != "" {
data, err := ioutil.ReadFile(s.UserDataFile)
if err != nil {
return "", err
}
userData = string(data)
}
if userData != "" {
userData = base64.StdEncoding.EncodeToString([]byte(userData))
}
return userData, nil
}


@@ -1,90 +0,0 @@
package ecs
import (
"context"
"fmt"
"time"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepCreateAlicloudSnapshot struct {
snapshot *ecs.Snapshot
WaitSnapshotReadyTimeout int
}
func (s *stepCreateAlicloudSnapshot) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
instance := state.Get("instance").(*ecs.Instance)
describeDisksRequest := ecs.CreateDescribeDisksRequest()
describeDisksRequest.RegionId = config.AlicloudRegion
describeDisksRequest.InstanceId = instance.InstanceId
describeDisksRequest.DiskType = DiskTypeSystem
disksResponse, err := client.DescribeDisks(describeDisksRequest)
if err != nil {
return halt(state, err, "Error describe disks")
}
disks := disksResponse.Disks.Disk
if len(disks) == 0 {
return halt(state, err, "Unable to find system disk of instance")
}
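// Snapshot the system disk. The snapshot id is stored in state as "alicloudsnapshot" so that the image creation step can build the image from it when data disks are ignored.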
createSnapshotRequest := ecs.CreateCreateSnapshotRequest()
createSnapshotRequest.DiskId = disks[0].DiskId
snapshot, err := client.CreateSnapshot(createSnapshotRequest)
if err != nil {
return halt(state, err, "Error creating snapshot")
}
// Create the alicloud snapshot
ui.Say(fmt.Sprintf("Creating snapshot from system disk %s: %s", disks[0].DiskId, snapshot.SnapshotId))
snapshotsResponse, err := client.WaitForSnapshotStatus(config.AlicloudRegion, snapshot.SnapshotId, SnapshotStatusAccomplished, time.Duration(s.WaitSnapshotReadyTimeout)*time.Second)
if err != nil {
_, ok := err.(errors.Error)
if ok {
return halt(state, err, "Error querying created snapshot")
}
return halt(state, err, "Timeout waiting for snapshot to be created")
}
snapshots := snapshotsResponse.(*ecs.DescribeSnapshotsResponse).Snapshots.Snapshot
if len(snapshots) == 0 {
return halt(state, err, "Unable to find created snapshot")
}
s.snapshot = &snapshots[0]
state.Put("alicloudsnapshot", snapshot.SnapshotId)
return multistep.ActionContinue
}
func (s *stepCreateAlicloudSnapshot) Cleanup(state multistep.StateBag) {
if s.snapshot == nil {
return
}
_, cancelled := state.GetOk(multistep.StateCancelled)
_, halted := state.GetOk(multistep.StateHalted)
if !cancelled && !halted {
return
}
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
ui.Say("Deleting the snapshot because of cancellation or error...")
deleteSnapshotRequest := ecs.CreateDeleteSnapshotRequest()
deleteSnapshotRequest.SnapshotId = s.snapshot.SnapshotId
if _, err := client.DeleteSnapshot(deleteSnapshotRequest); err != nil {
ui.Error(fmt.Sprintf("Error deleting snapshot, it may still be around: %s", err))
return
}
}


@@ -1,65 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepCreateTags struct {
Tags map[string]string
}
func (s *stepCreateTags) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
imageId := state.Get("alicloudimage").(string)
snapshotIds := state.Get("alicloudsnapshots").([]string)
if len(s.Tags) == 0 {
return multistep.ActionContinue
}
ui.Say(fmt.Sprintf("Adding tags(%s) to image: %s", s.Tags, imageId))
var tags []ecs.AddTagsTag
for key, value := range s.Tags {
var tag ecs.AddTagsTag
tag.Key = key
tag.Value = value
tags = append(tags, tag)
}
addTagsRequest := ecs.CreateAddTagsRequest()
addTagsRequest.RegionId = config.AlicloudRegion
addTagsRequest.ResourceId = imageId
addTagsRequest.ResourceType = TagResourceImage
addTagsRequest.Tag = &tags
if _, err := client.AddTags(addTagsRequest); err != nil {
return halt(state, err, "Error Adding tags to image")
}
for _, snapshotId := range snapshotIds {
ui.Say(fmt.Sprintf("Adding tags(%s) to snapshot: %s", s.Tags, snapshotId))
addTagsRequest := ecs.CreateAddTagsRequest()
addTagsRequest.RegionId = config.AlicloudRegion
addTagsRequest.ResourceId = snapshotId
addTagsRequest.ResourceType = TagResourceSnapshot
addTagsRequest.Tag = &tags
if _, err := client.AddTags(addTagsRequest); err != nil {
return halt(state, err, "Error Adding tags to snapshot")
}
}
return multistep.ActionContinue
}
func (s *stepCreateTags) Cleanup(state multistep.StateBag) {
// Nothing to do here: tags are removed when the tagged resources are cleaned up
}


@@ -1,101 +0,0 @@
package ecs
import (
"context"
"fmt"
"log"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepDeleteAlicloudImageSnapshots struct {
AlicloudImageForceDelete bool
AlicloudImageForceDeleteSnapshots bool
AlicloudImageName string
AlicloudImageDestinationRegions []string
AlicloudImageDestinationNames []string
}
func (s *stepDeleteAlicloudImageSnapshots) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
// Check for force delete
if s.AlicloudImageForceDelete {
err := s.deleteImageAndSnapshots(state, s.AlicloudImageName, config.AlicloudRegion)
if err != nil {
return halt(state, err, "")
}
numberOfName := len(s.AlicloudImageDestinationNames)
if numberOfName == 0 {
return multistep.ActionContinue
}
for index, destinationRegion := range s.AlicloudImageDestinationRegions {
if destinationRegion == config.AlicloudRegion {
continue
}
if index < numberOfName {
err = s.deleteImageAndSnapshots(state, s.AlicloudImageDestinationNames[index], destinationRegion)
if err != nil {
return halt(state, err, "")
}
} else {
break
}
}
}
return multistep.ActionContinue
}
func (s *stepDeleteAlicloudImageSnapshots) deleteImageAndSnapshots(state multistep.StateBag, imageName string, region string) error {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = region
describeImagesRequest.ImageName = imageName
describeImagesRequest.Status = ImageStatusQueried
imageResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("Failed to describe images: %s", err)
}
images := imageResponse.Images.Image
if len(images) < 1 {
return nil
}
ui.Say(fmt.Sprintf("Deleting duplicated image and snapshot in %s: %s", region, imageName))
for _, image := range images {
if image.ImageOwnerAlias != ImageOwnerSelf {
log.Printf("You can not delete non-customized images: %s ", image.ImageId)
continue
}
deleteImageRequest := ecs.CreateDeleteImageRequest()
deleteImageRequest.RegionId = region
deleteImageRequest.ImageId = image.ImageId
if _, err := client.DeleteImage(deleteImageRequest); err != nil {
err := fmt.Errorf("Failed to delete image: %s", err)
return err
}
if s.AlicloudImageForceDeleteSnapshots {
for _, diskDevice := range image.DiskDeviceMappings.DiskDeviceMapping {
deleteSnapshotRequest := ecs.CreateDeleteSnapshotRequest()
deleteSnapshotRequest.SnapshotId = diskDevice.SnapshotId
if _, err := client.DeleteSnapshot(deleteSnapshotRequest); err != nil {
err := fmt.Errorf("Deleting ECS snapshot failed: %s", err)
return err
}
}
}
}
return nil
}
func (s *stepDeleteAlicloudImageSnapshots) Cleanup(state multistep.StateBag) {
}


@@ -1,87 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepPreValidate struct {
AlicloudDestImageName string
ForceDelete bool
}
func (s *stepPreValidate) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
if err := s.validateRegions(state); err != nil {
return halt(state, err, "")
}
if err := s.validateDestImageName(state); err != nil {
return halt(state, err, "")
}
return multistep.ActionContinue
}
func (s *stepPreValidate) validateRegions(state multistep.StateBag) error {
ui := state.Get("ui").(packersdk.Ui)
config := state.Get("config").(*Config)
if config.AlicloudSkipValidation {
ui.Say("Skip region validation flag found, skipping prevalidating source region and copied regions.")
return nil
}
ui.Say("Prevalidating source region and copied regions...")
var errs *packersdk.MultiError
if err := config.ValidateRegion(config.AlicloudRegion); err != nil {
errs = packersdk.MultiErrorAppend(errs, err)
}
for _, region := range config.AlicloudImageDestinationRegions {
if err := config.ValidateRegion(region); err != nil {
errs = packersdk.MultiErrorAppend(errs, err)
}
}
if errs != nil && len(errs.Errors) > 0 {
return errs
}
return nil
}
func (s *stepPreValidate) validateDestImageName(state multistep.StateBag) error {
ui := state.Get("ui").(packersdk.Ui)
client := state.Get("client").(*ClientWrapper)
config := state.Get("config").(*Config)
if s.ForceDelete {
ui.Say("Force delete flag found, skipping prevalidating image name.")
return nil
}
ui.Say("Prevalidating image name...")
describeImagesRequest := ecs.CreateDescribeImagesRequest()
describeImagesRequest.RegionId = config.AlicloudRegion
describeImagesRequest.ImageName = s.AlicloudDestImageName
describeImagesRequest.Status = ImageStatusQueried
imagesResponse, err := client.DescribeImages(describeImagesRequest)
if err != nil {
return fmt.Errorf("Error querying alicloud image: %s", err)
}
images := imagesResponse.Images.Image
if len(images) > 0 {
return fmt.Errorf("Error: Image Name: '%s' is used by an existing alicloud image: %s", images[0].ImageName, images[0].ImageId)
}
return nil
}
func (s *stepPreValidate) Cleanup(multistep.StateBag) {}


@@ -1,106 +0,0 @@
package ecs
import (
"context"
"fmt"
"time"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
confighelper "github.com/hashicorp/packer-plugin-sdk/template/config"
)
type stepRegionCopyAlicloudImage struct {
AlicloudImageDestinationRegions []string
AlicloudImageDestinationNames []string
RegionId string
}
func (s *stepRegionCopyAlicloudImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
if config.ImageEncrypted != confighelper.TriUnset {
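// When image_encrypted is explicitly set, the image is also copied within the source region under the configured image name, with the requested encryption applied to the copy.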
s.AlicloudImageDestinationRegions = append(s.AlicloudImageDestinationRegions, s.RegionId)
s.AlicloudImageDestinationNames = append(s.AlicloudImageDestinationNames, config.AlicloudImageName)
}
if len(s.AlicloudImageDestinationRegions) == 0 {
return multistep.ActionContinue
}
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
srcImageId := state.Get("alicloudimage").(string)
alicloudImages := state.Get("alicloudimages").(map[string]string)
numberOfName := len(s.AlicloudImageDestinationNames)
ui.Say(fmt.Sprintf("Coping image %s from %s...", srcImageId, s.RegionId))
for index, destinationRegion := range s.AlicloudImageDestinationRegions {
if destinationRegion == s.RegionId && config.ImageEncrypted == confighelper.TriUnset {
continue
}
ecsImageName := ""
if numberOfName > 0 && index < numberOfName {
ecsImageName = s.AlicloudImageDestinationNames[index]
}
copyImageRequest := ecs.CreateCopyImageRequest()
copyImageRequest.RegionId = s.RegionId
copyImageRequest.ImageId = srcImageId
copyImageRequest.DestinationRegionId = destinationRegion
copyImageRequest.DestinationImageName = ecsImageName
if config.ImageEncrypted != confighelper.TriUnset {
copyImageRequest.Encrypted = requests.NewBoolean(config.ImageEncrypted.True())
}
imageResponse, err := client.CopyImage(copyImageRequest)
if err != nil {
return halt(state, err, "Error copying images")
}
alicloudImages[destinationRegion] = imageResponse.ImageId
ui.Message(fmt.Sprintf("Copy image from %s(%s) to %s(%s)", s.RegionId, srcImageId, destinationRegion, imageResponse.ImageId))
}
if config.ImageEncrypted != confighelper.TriUnset {
if _, err := client.WaitForImageStatus(s.RegionId, alicloudImages[s.RegionId], ImageStatusAvailable, time.Duration(ALICLOUD_DEFAULT_LONG_TIMEOUT)*time.Second); err != nil {
return halt(state, err, fmt.Sprintf("Timeout waiting image %s finish copying", alicloudImages[s.RegionId]))
}
}
return multistep.ActionContinue
}
func (s *stepRegionCopyAlicloudImage) Cleanup(state multistep.StateBag) {
_, cancelled := state.GetOk(multistep.StateCancelled)
_, halted := state.GetOk(multistep.StateHalted)
if !cancelled && !halted {
return
}
ui := state.Get("ui").(packersdk.Ui)
ui.Say(fmt.Sprintf("Stopping copy image because cancellation or error..."))
client := state.Get("client").(*ClientWrapper)
alicloudImages := state.Get("alicloudimages").(map[string]string)
srcImageId := state.Get("alicloudimage").(string)
for copiedRegionId, copiedImageId := range alicloudImages {
if copiedImageId == srcImageId {
continue
}
cancelCopyImageRequest := ecs.CreateCancelCopyImageRequest()
cancelCopyImageRequest.RegionId = copiedRegionId
cancelCopyImageRequest.ImageId = copiedImageId
if _, err := client.CancelCopyImage(cancelCopyImageRequest); err != nil {
ui.Error(fmt.Sprintf("Error cancelling copy image: %v", err))
}
}
}


@@ -1,72 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepRunAlicloudInstance struct {
}
func (s *stepRunAlicloudInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
ui := state.Get("ui").(packersdk.Ui)
instance := state.Get("instance").(*ecs.Instance)
startInstanceRequest := ecs.CreateStartInstanceRequest()
startInstanceRequest.InstanceId = instance.InstanceId
if _, err := client.StartInstance(startInstanceRequest); err != nil {
return halt(state, err, "Error starting instance")
}
ui.Say(fmt.Sprintf("Starting instance: %s", instance.InstanceId))
_, err := client.WaitForInstanceStatus(instance.RegionId, instance.InstanceId, InstanceStatusRunning)
if err != nil {
return halt(state, err, "Timeout waiting for instance to start")
}
return multistep.ActionContinue
}
func (s *stepRunAlicloudInstance) Cleanup(state multistep.StateBag) {
_, cancelled := state.GetOk(multistep.StateCancelled)
_, halted := state.GetOk(multistep.StateHalted)
if !cancelled && !halted {
return
}
ui := state.Get("ui").(packersdk.Ui)
client := state.Get("client").(*ClientWrapper)
instance := state.Get("instance").(*ecs.Instance)
describeInstancesRequest := ecs.CreateDescribeInstancesRequest()
describeInstancesRequest.InstanceIds = fmt.Sprintf("[\"%s\"]", instance.InstanceId)
instancesResponse, descErr := client.DescribeInstances(describeInstancesRequest)
if descErr != nil || len(instancesResponse.Instances.Instance) == 0 {
return
}
instanceAttribute := instancesResponse.Instances.Instance[0]
if instanceAttribute.Status == InstanceStatusStarting || instanceAttribute.Status == InstanceStatusRunning {
stopInstanceRequest := ecs.CreateStopInstanceRequest()
stopInstanceRequest.InstanceId = instance.InstanceId
stopInstanceRequest.ForceStop = requests.NewBoolean(true)
if _, err := client.StopInstance(stopInstanceRequest); err != nil {
ui.Say(fmt.Sprintf("Error stopping instance %s, it may still be around %s", instance.InstanceId, err))
return
}
_, err := client.WaitForInstanceStatus(instance.RegionId, instance.InstanceId, InstanceStatusStopped)
if err != nil {
ui.Say(fmt.Sprintf("Error stopping instance %s, it may still be around %s", instance.InstanceId, err))
}
}
}


@@ -1,60 +0,0 @@
package ecs
import (
"context"
"fmt"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepShareAlicloudImage struct {
AlicloudImageShareAccounts []string
AlicloudImageUNShareAccounts []string
RegionId string
}
func (s *stepShareAlicloudImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
alicloudImages := state.Get("alicloudimages").(map[string]string)
for regionId, imageId := range alicloudImages {
modifyImageShareRequest := ecs.CreateModifyImageSharePermissionRequest()
modifyImageShareRequest.RegionId = regionId
modifyImageShareRequest.ImageId = imageId
modifyImageShareRequest.AddAccount = &s.AlicloudImageShareAccounts
modifyImageShareRequest.RemoveAccount = &s.AlicloudImageUNShareAccounts
if _, err := client.ModifyImageSharePermission(modifyImageShareRequest); err != nil {
return halt(state, err, "Failed modifying image share permissions")
}
}
return multistep.ActionContinue
}
func (s *stepShareAlicloudImage) Cleanup(state multistep.StateBag) {
_, cancelled := state.GetOk(multistep.StateCancelled)
_, halted := state.GetOk(multistep.StateHalted)
if !cancelled && !halted {
return
}
ui := state.Get("ui").(packersdk.Ui)
client := state.Get("client").(*ClientWrapper)
alicloudImages := state.Get("alicloudimages").(map[string]string)
ui.Say("Restoring image share permission because cancellations or error...")
for regionId, imageId := range alicloudImages {
modifyImageShareRequest := ecs.CreateModifyImageSharePermissionRequest()
modifyImageShareRequest.RegionId = regionId
modifyImageShareRequest.ImageId = imageId
modifyImageShareRequest.AddAccount = &s.AlicloudImageUNShareAccounts
modifyImageShareRequest.RemoveAccount = &s.AlicloudImageShareAccounts
if _, err := client.ModifyImageSharePermission(modifyImageShareRequest); err != nil {
ui.Say(fmt.Sprintf("Restoring image share permission failed: %s", err))
}
}
}


@@ -1,48 +0,0 @@
package ecs
import (
"context"
"fmt"
"strconv"
"github.com/aliyun/alibaba-cloud-sdk-go/sdk/requests"
"github.com/aliyun/alibaba-cloud-sdk-go/services/ecs"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type stepStopAlicloudInstance struct {
ForceStop bool
DisableStop bool
}
func (s *stepStopAlicloudInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
client := state.Get("client").(*ClientWrapper)
instance := state.Get("instance").(*ecs.Instance)
ui := state.Get("ui").(packersdk.Ui)
if !s.DisableStop {
ui.Say(fmt.Sprintf("Stopping instance: %s", instance.InstanceId))
stopInstanceRequest := ecs.CreateStopInstanceRequest()
stopInstanceRequest.InstanceId = instance.InstanceId
stopInstanceRequest.ForceStop = requests.Boolean(strconv.FormatBool(s.ForceStop))
if _, err := client.StopInstance(stopInstanceRequest); err != nil {
return halt(state, err, "Error stopping alicloud instance")
}
}
ui.Say(fmt.Sprintf("Waiting instance stopped: %s", instance.InstanceId))
_, err := client.WaitForInstanceStatus(instance.RegionId, instance.InstanceId, InstanceStatusStopped)
if err != nil {
return halt(state, err, "Error waiting for alicloud instance to stop")
}
return multistep.ActionContinue
}
func (s *stepStopAlicloudInstance) Cleanup(multistep.StateBag) {
// No cleanup...
}


@@ -1,25 +0,0 @@
{
"variables": {
"access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
"secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
},
"builders": [{
"type":"alicloud-ecs",
"access_key":"{{user `access_key`}}",
"secret_key":"{{user `secret_key`}}",
"region":"cn-beijing",
"image_name":"packer_basic",
"source_image":"centos_7_03_64_20G_alibase_20170818.vhd",
"ssh_username":"root",
"instance_type":"ecs.n1.tiny",
"internet_charge_type":"PayByTraffic",
"io_optimized":"true"
}],
"provisioners": [{
"type": "shell",
"inline": [
"sleep 30",
"yum install redis.x86_64 -y"
]
}]
}
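
A minimal sketch of how this basic template could be run, assuming it is saved as examples/alicloud/basic/alicloud.json (the path and filename are illustrative); the template reads its credentials from the two environment variables referenced above:

# export the credentials the template pulls in via {{env `...`}}
export ALICLOUD_ACCESS_KEY="your-access-key"
export ALICLOUD_SECRET_KEY="your-secret-key"

# check the template, then build the image in cn-beijing
packer validate examples/alicloud/basic/alicloud.json
packer build examples/alicloud/basic/alicloud.json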


@@ -1,27 +0,0 @@
{
"variables": {
"access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
"secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
},
"builders": [{
"type":"alicloud-ecs",
"access_key":"{{user `access_key`}}",
"secret_key":"{{user `secret_key`}}",
"region":"cn-beijing",
"image_name":"packer_test",
"source_image":"winsvr_64_dtcC_1809_en-us_40G_alibase_20190318.vhd",
"instance_type":"ecs.n1.tiny",
"io_optimized":"true",
"internet_charge_type":"PayByTraffic",
"image_force_delete":"true",
"communicator": "winrm",
"winrm_port": 5985,
"winrm_username": "Administrator",
"winrm_password": "Test1234",
"user_data_file": "examples/alicloud/basic/winrm_enable_userdata.ps1"
}],
"provisioners": [{
"type": "powershell",
"inline": ["dir c:\\"]
}]
}


@@ -1,37 +0,0 @@
{
"variables": {
"access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
"secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
},
"builders": [{
"type":"alicloud-ecs",
"access_key":"{{user `access_key`}}",
"secret_key":"{{user `secret_key`}}",
"region":"cn-beijing",
"image_name":"packer_with_data_disk",
"source_image":"centos_7_03_64_20G_alibase_20170818.vhd",
"ssh_username":"root",
"instance_type":"ecs.n1.tiny",
"internet_charge_type":"PayByTraffic",
"io_optimized":"true",
"image_disk_mappings":[
{
"disk_name":"data1",
"disk_size":20,
"disk_delete_with_instance": true
},{
"disk_name":"data2",
"disk_size":20,
"disk_device":"/dev/xvdz",
"disk_delete_with_instance": true
}
]
}],
"provisioners": [{
"type": "shell",
"inline": [
"sleep 30",
"yum install redis.x86_64 -y"
]
}]
}


@@ -1,26 +0,0 @@
#powershell
write-output "Running User Data Script"
write-host "(host) Running User Data Script"
Set-ExecutionPolicy Unrestricted -Scope LocalMachine -Force -ErrorAction Ignore
# Don't set this before Set-ExecutionPolicy as it throws an error
$ErrorActionPreference = "stop"
# Remove HTTP listener
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
# WinRM
write-output "Setting up WinRM"
write-host "(host) setting up WinRM"
cmd.exe /c winrm quickconfig -q
cmd.exe /c winrm quickconfig '-transport:http'
cmd.exe /c winrm set "winrm/config" '@{MaxTimeoutms="1800000"}'
cmd.exe /c winrm set "winrm/config/winrs" '@{MaxMemoryPerShellMB="10240"}'
cmd.exe /c winrm set "winrm/config/service" '@{AllowUnencrypted="true"}'
cmd.exe /c winrm set "winrm/config/client" '@{AllowUnencrypted="true"}'
cmd.exe /c winrm set "winrm/config/service/auth" '@{Basic="true"}'
cmd.exe /c winrm set "winrm/config/client/auth" '@{Basic="true"}'
cmd.exe /c winrm set "winrm/config/service/auth" '@{CredSSP="true"}'
cmd.exe /c winrm set "winrm/config/listener?Address=*+Transport=HTTP" '@{Port="5985"}'
cmd.exe /c netsh advfirewall firewall set rule group="remote administration" new enable=yes
cmd.exe /c netsh firewall add portopening TCP 5985 "Port 5985"
cmd.exe /c net stop winrm
cmd.exe /c sc config winrm start= auto
cmd.exe /c net start winrm


@@ -1,34 +0,0 @@
{
"variables": {
"access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
"secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
},
"builders": [{
"type":"alicloud-ecs",
"access_key":"{{user `access_key`}}",
"secret_key":"{{user `secret_key`}}",
"region":"cn-beijing",
"image_name":"packer_chef2",
"source_image":"ubuntu_18_04_64_20G_alibase_20190223.vhd",
"ssh_username":"root",
"instance_type":"ecs.n1.medium",
"io_optimized":"true",
"image_force_delete":"true",
"internet_charge_type":"PayByTraffic",
"ssh_password":"Test1234",
"user_data_file":"examples/alicloud/chef/user_data.sh"
}],
"provisioners": [{
"type": "file",
"source": "examples/alicloud/chef/chef.sh",
"destination": "/root/"
},{
"type": "shell",
"inline": [
"cd /root/",
"chmod 755 chef.sh",
"./chef.sh",
"chef-server-ctl reconfigure"
]
}]
}


@@ -1,47 +0,0 @@
#!/bin/sh
#if the related deb pkg is not found, replace it with another available repository url
HOSTNAME=`ifconfig eth1|grep 'inet addr'|cut -d ":" -f2|cut -d " " -f1`
if [ -z "$HOSTNAME" ] ; then
HOSTNAME=`ifconfig eth0|grep 'inet addr'|cut -d ":" -f2|cut -d " " -f1`
fi
CHEF_SERVER_URL='http://dubbo.oss-cn-shenzhen.aliyuncs.com/chef-server-core_12.8.0-1_amd64.deb'
CHEF_CONSOLE_URL='http://dubbo.oss-cn-shenzhen.aliyuncs.com/chef-manage_2.4.3-1_amd64.deb'
CHEF_SERVER_ADMIN='admin'
CHEF_SERVER_ADMIN_PASSWORD='vmADMIN123'
ORGANIZATION='aliyun'
ORGANIZATION_FULL_NAME='Aliyun, Inc'
#specify hostname
hostname $HOSTNAME
mkdir ~/.pemfile
#install chef server
wget $CHEF_SERVER_URL
sudo dpkg -i chef-server-core_*.deb
sudo chef-server-ctl reconfigure
#create admin user
sudo chef-server-ctl user-create $CHEF_SERVER_ADMIN $CHEF_SERVER_ADMIN $CHEF_SERVER_ADMIN 641002259@qq.com $CHEF_SERVER_ADMIN_PASSWORD -f ~/.pemfile/admin.pem
#create aliyun organization
sudo chef-server-ctl org-create $ORGANIZATION $ORGANIZATION_FULL_NAME --association_user $CHEF_SERVER_ADMIN -f ~/.pemfile/aliyun-validator.pem
#install chef management console
wget $CHEF_CONSOLE_URL
sudo dpkg -i chef-manage_*.deb
sudo chef-server-ctl reconfigure
type expect >/dev/null 2>&1 || { echo >&2 "Install Expect..."; apt-get -y install expect; }
echo "spawn sudo chef-manage-ctl reconfigure" >> chef-manage-confirm.exp
echo "expect \"*Press any key to continue\"" >> chef-manage-confirm.exp
echo "send \"a\\\n\"" >> chef-manage-confirm.exp
echo "expect \".*chef-manage 2.4.3 license: \\\"Chef-MLSA\\\".*\"" >> chef-manage-confirm.exp
echo "send \"q\"" >> chef-manage-confirm.exp
echo "expect \".*Type 'yes' to accept the software license agreement, or anything else to cancel.\"" >> chef-manage-confirm.exp
echo "send \"yes\\\n\"" >> chef-manage-confirm.exp
echo "interact" >> chef-manage-confirm.exp
expect chef-manage-confirm.exp
rm -f chef-manage-confirm.exp
#clean
rm -rf chef-manage_2.4.3-1_amd64.deb
rm -rf chef-server-core_12.8.0-1_amd64.deb


@@ -1,6 +0,0 @@
HOSTNAME=`ifconfig eth1|grep 'inet addr'|cut -d ":" -f2|cut -d " " -f1`
if [ -z "$HOSTNAME" ] ; then
HOSTNAME=`ifconfig eth0|grep 'inet addr'|cut -d ":" -f2|cut -d " " -f1`
fi
hostname $HOSTNAME
chef-server-ctl reconfigure


@@ -1,32 +0,0 @@
{
"variables": {
"access_key": "{{env `ALICLOUD_ACCESS_KEY`}}",
"secret_key": "{{env `ALICLOUD_SECRET_KEY`}}"
},
"builders": [{
"type":"alicloud-ecs",
"access_key":"{{user `access_key`}}",
"secret_key":"{{user `secret_key`}}",
"region":"cn-beijing",
"image_name":"packer_jenkins",
"source_image":"ubuntu_18_04_64_20G_alibase_20190223.vhd",
"ssh_username":"root",
"instance_type":"ecs.n1.medium",
"io_optimized":"true",
"internet_charge_type":"PayByTraffic",
"image_force_delete":"true",
"ssh_password":"Test12345"
}],
"provisioners": [{
"type": "file",
"source": "examples/alicloud/jenkins/jenkins.sh",
"destination": "/root/"
},{
"type": "shell",
"inline": [
"cd /root/",
"chmod 755 jenkins.sh",
"./jenkins.sh"
]
}]
}

View File

@ -1,48 +0,0 @@
#!/bin/sh
JENKINS_URL='http://mirrors.jenkins.io/war-stable/2.32.2/jenkins.war'
TOMCAT_VERSION='7.0.77'
TOMCAT_NAME="apache-tomcat-$TOMCAT_VERSION"
TOMCAT_PACKAGE="$TOMCAT_NAME.tar.gz"
TOMCAT_URL="http://mirror.bit.edu.cn/apache/tomcat/tomcat-7/v$TOMCAT_VERSION/bin/$TOMCAT_PACKAGE"
TOMCAT_PATH="/opt/$TOMCAT_NAME"
#install jdk
if grep -Eqi "Ubuntu|Debian|Raspbian" /etc/issue || grep -Eq "Ubuntu|Debian|Raspbian" /etc/*-release; then
sudo apt-get update -y
sudo apt-get install -y openjdk-8-jdk
elif grep -Eqi "CentOS|Fedora|Red Hat Enterprise Linux Server" /etc/issue || grep -Eq "CentOS|Fedora|Red Hat Enterprise Linux Server" /etc/*-release; then
sudo yum update -y
sudo yum install -y java-1.8.0-openjdk-devel
else
echo "Unknown OS type."
fi
#install jenkins server
mkdir ~/work
cd ~/work
#install tomcat
wget $TOMCAT_URL
tar -zxvf $TOMCAT_PACKAGE
mv $TOMCAT_NAME /opt
#install jenkins
wget $JENKINS_URL
mv jenkins.war $TOMCAT_PATH/webapps/
#set environment
echo "TOMCAT_PATH=\"$TOMCAT_PATH\"">>/etc/profile
echo "JENKINS_HOME=\"$TOMCAT_PATH/webapps/jenkins\"">>/etc/profile
echo PATH="\"\$PATH:\$TOMCAT_PATH:\$JENKINS_HOME\"">>/etc/profile
. /etc/profile
#start tomcat & jenkins
$TOMCAT_PATH/bin/startup.sh
#set start on boot
sed -i "/#!\/bin\/sh/a$TOMCAT_PATH/bin/startup.sh" /etc/rc.local
#clean
rm -rf ~/work
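A minimal smoke test for the result (a sketch, assuming Tomcat's default HTTP port 8080 and the jenkins.war context deployed above):
# jenkins.war was copied into Tomcat's webapps/, so Jenkins should answer on /jenkins.
curl -I http://localhost:8080/jenkins/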

View File

@ -1,59 +0,0 @@
{"variables": {
"box_basename": "centos-6.8",
"build_timestamp": "{{isotime \"20060102150405\"}}",
"cpus": "1",
"disk_size": "4096",
"git_revision": "__unknown_git_revision__",
"headless": "",
"http_proxy": "{{env `http_proxy`}}",
"https_proxy": "{{env `https_proxy`}}",
"iso_checksum": "md5:0ca12fe5f28c2ceed4f4084b41ff8a0b",
"iso_name": "CentOS-6.8-x86_64-minimal.iso",
"ks_path": "centos-6.8/ks.cfg",
"memory": "512",
"metadata": "floppy/dummy_metadata.json",
"mirror": "http://mirrors.aliyun.com/centos",
"mirror_directory": "6.8/isos/x86_64",
"name": "centos-6.8",
"no_proxy": "{{env `no_proxy`}}",
"template": "centos-6.8-x86_64",
"version": "2.1.TIMESTAMP"
},
"builders":[
{
"boot_command": [
"<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{user `ks_path`}}<enter><wait>"
],
"boot_wait": "10s",
"disk_size": "{{user `disk_size`}}",
"headless": "{{ user `headless` }}",
"http_directory": "http",
"iso_checksum": "{{user `iso_checksum`}}",
"iso_checksum_type": "{{user `iso_checksum_type`}}",
"iso_url": "{{user `mirror`}}/{{user `mirror_directory`}}/{{user `iso_name`}}",
"output_directory": "packer-{{user `template`}}-qemu",
"shutdown_command": "echo 'vagrant'|sudo -S /sbin/halt -h -p",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_username": "root",
"ssh_timeout": "10000s",
"type": "qemu",
"vm_name": "{{ user `template` }}.raw",
"net_device": "virtio-net",
"disk_interface": "virtio",
"format": "raw"
}
],
"post-processors":[
{
"type":"alicloud-import",
"oss_bucket_name": "packer",
"image_name": "packer_import",
"image_os_type": "linux",
"image_platform": "CentOS",
"image_architecture": "x86_64",
"image_system_size": "40",
"region":"cn-beijing"
}
]
}
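The boot_command above fetches the kickstart over Packer's built-in HTTP server, so the ks.cfg shown in the next file has to live under the configured http_directory. A layout sketch, using the names from this template (paths relative to the template file):
# "http_directory" is "http" and "ks_path" is "centos-6.8/ks.cfg", so:
mkdir -p http/centos-6.8
cp ks.cfg http/centos-6.8/ks.cfg   # ks.cfg is hypothetical shorthand for the kickstart in the next file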

View File

@ -1,69 +0,0 @@
install
cdrom
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw vagrant
firewall --disabled
selinux --permissive
timezone UTC
unsupported_hardware
bootloader --location=mbr
text
skipx
zerombr
clearpart --all --initlabel
autopart
auth --enableshadow --passalgo=sha512 --kickstart
firstboot --disabled
reboot
user --name=vagrant --plaintext --password vagrant
key --skip
%packages --nobase --ignoremissing --excludedocs
# vagrant needs this to copy initial files via scp
openssh-clients
sudo
kernel-headers
kernel-devel
gcc
make
perl
wget
nfs-utils
-fprintd-pam
-intltool
# unnecessary firmware
-aic94xx-firmware
-atmel-firmware
-b43-openfwwf
-bfa-firmware
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
-iwl100-firmware
-iwl1000-firmware
-iwl3945-firmware
-iwl4965-firmware
-iwl5000-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6000g2a-firmware
-iwl6050-firmware
-libertas-usb8388-firmware
-ql2100-firmware
-ql2200-firmware
-ql23xx-firmware
-ql2400-firmware
-ql2500-firmware
-rt61pci-firmware
-rt73usb-firmware
-xorg-x11-drv-ati-firmware
-zd1211-firmware
%post
# Force to set SELinux to a permissive mode
sed -i -e 's/\(^SELINUX=\).*$/\1permissive/' /etc/selinux/config
# sudo
echo "%vagrant ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/vagrant

View File

@ -1,13 +0,0 @@
package version
import (
"github.com/hashicorp/packer-plugin-sdk/version"
packerVersion "github.com/hashicorp/packer/version"
)
var AlicloudPluginVersion *version.PluginVersion
func init() {
AlicloudPluginVersion = version.InitializePluginVersion(
packerVersion.Version, packerVersion.VersionPrerelease)
}

View File

@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,16 +0,0 @@
Here's a list of things we'd like to get done, in no particular order:
- [ ] Blob/image copy post-processor
- [ ] Blob/image rename post-processor
- [ ] SSH to private ip through subnet
- [ ] chroot builder
- [ ] support cross-storage account image source (i.e. pre-build blob copy)
- [ ] look up object id when using device code (graph api /me ?)
- [ ] device flow support for Windows
- [x] look up tenant id in all cases (see device flow code)
- [ ] look up resource group of storage account
- [ ] include all _data_ disks in artifact too
- [ ] windows sysprep provisioner (since it seems to generate a certain volume of issues)
- [ ] allow arbitrary json patching for deployment document
- [ ] tag all resources with user-supplied tag
- [ ] managed disk support

View File

@ -1,118 +0,0 @@
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
"contentVersion": "1.0.0.0",
"parameters": {
"adminPassword": {
"type": "string"
},
"adminUsername": {
"type": "string"
},
"dnsNameForPublicIP": {
"type": "string"
},
"osDiskName": {
"type": "string"
},
"storageAccountBlobEndpoint": {
"type": "string"
},
"vmName": {
"type": "string"
},
"vmSize": {
"type": "string"
}
},
"resources": [
{
"apiVersion": "[variables('apiVersion')]",
"dependsOn": [],
"location": "[variables('location')]",
"name": "[variables('nicName')]",
"properties": {
"ipConfigurations": [
{
"name": "ipconfig",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
},
"type": "Microsoft.Network/networkInterfaces"
},
{
"apiVersion": "[variables('apiVersion')]",
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"location": "[variables('location')]",
"name": "[parameters('vmName')]",
"properties": {
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": false
}
},
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
}
]
},
"osProfile": {
"adminPassword": "[parameters('adminPassword')]",
"adminUsername": "[parameters('adminUsername')]",
"computerName": "[parameters('vmName')]",
"linuxConfiguration": {
"ssh": {
"publicKeys": [
{
"keyData": "",
"path": "[variables('sshKeyPath')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage",
"image": {
"uri": "https://localhost/custom.vhd"
},
"name": "[parameters('osDiskName')]",
"osType": "Linux",
"vhd": {
"uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]"
}
}
}
},
"type": "Microsoft.Compute/virtualMachines"
}
],
"variables": {
"addressPrefix": "10.0.0.0/16",
"apiVersion": "2015-06-15",
"location": "[resourceGroup().location]",
"publicIPAddressType": "Dynamic",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"subnetAddressPrefix": "10.0.0.0/24",
"subnetName": "ignore",
"subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
"virtualNetworkName": "ignore",
"virtualNetworkResourceGroup": "ignore",
"vmStorageAccountContainerName": "images",
"vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
}
}
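To sanity-check a template like this outside the builder, the Azure CLI can validate it against a resource group (a sketch; the resource group, file name, and parameter values are placeholders, and validation still expects the referenced network resources to exist):
# Hypothetical validation of the deployment template with the Azure CLI.
az deployment group validate \
  --resource-group my-packer-rg \
  --template-file template.json \
  --parameters adminUsername=packer adminPassword='P@ssw0rd123!' \
               dnsNameForPublicIP=packer-test osDiskName=osdisk \
               storageAccountBlobEndpoint=https://mystorage.blob.core.windows.net/ \
               vmName=packer-vm vmSize=Standard_DS2_v2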

View File

@ -1,231 +0,0 @@
package arm
import (
"bytes"
"fmt"
"net/url"
"path"
"strings"
)
const (
BuilderId = "Azure.ResourceManagement.VMImage"
)
type AdditionalDiskArtifact struct {
AdditionalDiskUri string
AdditionalDiskUriReadOnlySas string
}
type Artifact struct {
// OS type: Linux, Windows
OSType string
// VHD
StorageAccountLocation string
OSDiskUri string
TemplateUri string
OSDiskUriReadOnlySas string
TemplateUriReadOnlySas string
// Managed Image
ManagedImageResourceGroupName string
ManagedImageName string
ManagedImageLocation string
ManagedImageId string
ManagedImageOSDiskSnapshotName string
ManagedImageDataDiskSnapshotPrefix string
// ARM resource id for Shared Image Gallery
ManagedImageSharedImageGalleryId string
// Additional Disks
AdditionalDisks *[]AdditionalDiskArtifact
// StateData should store data such as GeneratedData
// to be shared with post-processors
StateData map[string]interface{}
}
func NewManagedImageArtifact(osType, resourceGroup, name, location, id, osDiskSnapshotName, dataDiskSnapshotPrefix string, generatedData map[string]interface{}) (*Artifact, error) {
return &Artifact{
ManagedImageResourceGroupName: resourceGroup,
ManagedImageName: name,
ManagedImageLocation: location,
ManagedImageId: id,
OSType: osType,
ManagedImageOSDiskSnapshotName: osDiskSnapshotName,
ManagedImageDataDiskSnapshotPrefix: dataDiskSnapshotPrefix,
StateData: generatedData,
}, nil
}
func NewManagedImageArtifactWithSIGAsDestination(osType, resourceGroup, name, location, id, osDiskSnapshotName, dataDiskSnapshotPrefix, destinationSharedImageGalleryId string, generatedData map[string]interface{}) (*Artifact, error) {
return &Artifact{
ManagedImageResourceGroupName: resourceGroup,
ManagedImageName: name,
ManagedImageLocation: location,
ManagedImageId: id,
OSType: osType,
ManagedImageOSDiskSnapshotName: osDiskSnapshotName,
ManagedImageDataDiskSnapshotPrefix: dataDiskSnapshotPrefix,
ManagedImageSharedImageGalleryId: destinationSharedImageGalleryId,
StateData: generatedData,
}, nil
}
func NewArtifact(template *CaptureTemplate, getSasUrl func(name string) string, osType string, generatedData map[string]interface{}) (*Artifact, error) {
if template == nil {
return nil, fmt.Errorf("nil capture template")
}
if len(template.Resources) != 1 {
return nil, fmt.Errorf("malformed capture template, expected one resource")
}
vhdUri, err := url.Parse(template.Resources[0].Properties.StorageProfile.OSDisk.Image.Uri)
if err != nil {
return nil, err
}
templateUri, err := storageUriToTemplateUri(vhdUri)
if err != nil {
return nil, err
}
var additional_disks *[]AdditionalDiskArtifact
if template.Resources[0].Properties.StorageProfile.DataDisks != nil {
data_disks := make([]AdditionalDiskArtifact, len(template.Resources[0].Properties.StorageProfile.DataDisks))
for i, additionaldisk := range template.Resources[0].Properties.StorageProfile.DataDisks {
additionalVhdUri, err := url.Parse(additionaldisk.Image.Uri)
if err != nil {
return nil, err
}
data_disks[i].AdditionalDiskUri = additionalVhdUri.String()
data_disks[i].AdditionalDiskUriReadOnlySas = getSasUrl(getStorageUrlPath(additionalVhdUri))
}
additional_disks = &data_disks
}
return &Artifact{
OSType: osType,
OSDiskUri: vhdUri.String(),
OSDiskUriReadOnlySas: getSasUrl(getStorageUrlPath(vhdUri)),
TemplateUri: templateUri.String(),
TemplateUriReadOnlySas: getSasUrl(getStorageUrlPath(templateUri)),
AdditionalDisks: additional_disks,
StorageAccountLocation: template.Resources[0].Location,
StateData: generatedData,
}, nil
}
func getStorageUrlPath(u *url.URL) string {
parts := strings.Split(u.Path, "/")
return strings.Join(parts[3:], "/")
}
func storageUriToTemplateUri(su *url.URL) (*url.URL, error) {
// packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd -> 4085bb15-3644-4641-b9cd-f575918640b4
filename := path.Base(su.Path)
parts := strings.Split(filename, ".")
if len(parts) < 3 {
return nil, fmt.Errorf("malformed URL")
}
// packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd -> packer
prefixParts := strings.Split(parts[0], "-")
prefix := strings.Join(prefixParts[:len(prefixParts)-1], "-")
templateFilename := fmt.Sprintf("%s-vmTemplate.%s.json", prefix, parts[1])
// https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd"
// ->
// https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json"
return url.Parse(strings.Replace(su.String(), filename, templateFilename, 1))
}
func (a *Artifact) isManagedImage() bool {
return a.ManagedImageResourceGroupName != ""
}
func (*Artifact) BuilderId() string {
return BuilderId
}
func (*Artifact) Files() []string {
return []string{}
}
func (a *Artifact) Id() string {
if a.OSDiskUri != "" {
return a.OSDiskUri
}
return a.ManagedImageId
}
func (a *Artifact) State(name string) interface{} {
if _, ok := a.StateData[name]; ok {
return a.StateData[name]
}
switch name {
case "atlas.artifact.metadata":
return a.stateAtlasMetadata()
default:
return nil
}
}
func (a *Artifact) String() string {
var buf bytes.Buffer
buf.WriteString(fmt.Sprintf("%s:\n\n", a.BuilderId()))
buf.WriteString(fmt.Sprintf("OSType: %s\n", a.OSType))
if a.isManagedImage() {
buf.WriteString(fmt.Sprintf("ManagedImageResourceGroupName: %s\n", a.ManagedImageResourceGroupName))
buf.WriteString(fmt.Sprintf("ManagedImageName: %s\n", a.ManagedImageName))
buf.WriteString(fmt.Sprintf("ManagedImageId: %s\n", a.ManagedImageId))
buf.WriteString(fmt.Sprintf("ManagedImageLocation: %s\n", a.ManagedImageLocation))
if a.ManagedImageOSDiskSnapshotName != "" {
buf.WriteString(fmt.Sprintf("ManagedImageOSDiskSnapshotName: %s\n", a.ManagedImageOSDiskSnapshotName))
}
if a.ManagedImageDataDiskSnapshotPrefix != "" {
buf.WriteString(fmt.Sprintf("ManagedImageDataDiskSnapshotPrefix: %s\n", a.ManagedImageDataDiskSnapshotPrefix))
}
if a.ManagedImageSharedImageGalleryId != "" {
buf.WriteString(fmt.Sprintf("ManagedImageSharedImageGalleryId: %s\n", a.ManagedImageSharedImageGalleryId))
}
} else {
buf.WriteString(fmt.Sprintf("StorageAccountLocation: %s\n", a.StorageAccountLocation))
buf.WriteString(fmt.Sprintf("OSDiskUri: %s\n", a.OSDiskUri))
buf.WriteString(fmt.Sprintf("OSDiskUriReadOnlySas: %s\n", a.OSDiskUriReadOnlySas))
buf.WriteString(fmt.Sprintf("TemplateUri: %s\n", a.TemplateUri))
buf.WriteString(fmt.Sprintf("TemplateUriReadOnlySas: %s\n", a.TemplateUriReadOnlySas))
if a.AdditionalDisks != nil {
for i, additionaldisk := range *a.AdditionalDisks {
buf.WriteString(fmt.Sprintf("AdditionalDiskUri (datadisk-%d): %s\n", i+1, additionaldisk.AdditionalDiskUri))
buf.WriteString(fmt.Sprintf("AdditionalDiskUriReadOnlySas (datadisk-%d): %s\n", i+1, additionaldisk.AdditionalDiskUriReadOnlySas))
}
}
}
return buf.String()
}
func (*Artifact) Destroy() error {
return nil
}
func (a *Artifact) stateAtlasMetadata() interface{} {
metadata := make(map[string]string)
metadata["StorageAccountLocation"] = a.StorageAccountLocation
metadata["OSDiskUri"] = a.OSDiskUri
metadata["OSDiskUriReadOnlySas"] = a.OSDiskUriReadOnlySas
metadata["TemplateUri"] = a.TemplateUri
metadata["TemplateUriReadOnlySas"] = a.TemplateUriReadOnlySas
return metadata
}

View File

@ -1,429 +0,0 @@
package arm
import (
"fmt"
"strings"
"testing"
)
func getFakeSasUrl(name string) string {
return fmt.Sprintf("SAS-%s", name)
}
func generatedData() map[string]interface{} {
return make(map[string]interface{})
}
func TestArtifactIdVHD(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
},
},
Location: "southcentralus",
},
},
}
artifact, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
expected := "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd"
result := artifact.Id()
if result != expected {
t.Fatalf("bad: %s", result)
}
}
func TestArtifactIDManagedImage(t *testing.T) {
artifact, err := NewManagedImageArtifact("Linux", "fakeResourceGroup", "fakeName", "fakeLocation", "fakeID", "fakeOsDiskSnapshotName", "fakeDataDiskSnapshotPrefix", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
expected := `Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: fakeResourceGroup
ManagedImageName: fakeName
ManagedImageId: fakeID
ManagedImageLocation: fakeLocation
ManagedImageOSDiskSnapshotName: fakeOsDiskSnapshotName
ManagedImageDataDiskSnapshotPrefix: fakeDataDiskSnapshotPrefix
`
result := artifact.String()
if result != expected {
t.Fatalf("bad: %s", result)
}
}
func TestArtifactIDManagedImageWithoutOSDiskSnapshotName(t *testing.T) {
artifact, err := NewManagedImageArtifact("Linux", "fakeResourceGroup", "fakeName", "fakeLocation", "fakeID", "", "fakeDataDiskSnapshotPrefix", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
expected := `Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: fakeResourceGroup
ManagedImageName: fakeName
ManagedImageId: fakeID
ManagedImageLocation: fakeLocation
ManagedImageDataDiskSnapshotPrefix: fakeDataDiskSnapshotPrefix
`
result := artifact.String()
if result != expected {
t.Fatalf("bad: %s", result)
}
}
func TestArtifactIDManagedImageWithoutDataDiskSnapshotPrefix(t *testing.T) {
artifact, err := NewManagedImageArtifact("Linux", "fakeResourceGroup", "fakeName", "fakeLocation", "fakeID", "fakeOsDiskSnapshotName", "", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
expected := `Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: fakeResourceGroup
ManagedImageName: fakeName
ManagedImageId: fakeID
ManagedImageLocation: fakeLocation
ManagedImageOSDiskSnapshotName: fakeOsDiskSnapshotName
`
result := artifact.String()
if result != expected {
t.Fatalf("bad: %s", result)
}
}
func TestArtifactIDManagedImageWithSharedImageGalleryId(t *testing.T) {
artifact, err := NewManagedImageArtifactWithSIGAsDestination("Linux", "fakeResourceGroup", "fakeName", "fakeLocation", "fakeID", "fakeOsDiskSnapshotName", "fakeDataDiskSnapshotPrefix", "fakeSharedImageGallery", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
expected := `Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: fakeResourceGroup
ManagedImageName: fakeName
ManagedImageId: fakeID
ManagedImageLocation: fakeLocation
ManagedImageOSDiskSnapshotName: fakeOsDiskSnapshotName
ManagedImageDataDiskSnapshotPrefix: fakeDataDiskSnapshotPrefix
ManagedImageSharedImageGalleryId: fakeSharedImageGallery
`
result := artifact.String()
if result != expected {
t.Fatalf("bad: %s", result)
}
}
func TestArtifactString(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
},
},
Location: "southcentralus",
},
},
}
artifact, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
testSubject := artifact.String()
if !strings.Contains(testSubject, "OSDiskUri: https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd") {
t.Errorf("Expected String() output to contain OSDiskUri")
}
if !strings.Contains(testSubject, "OSDiskUriReadOnlySas: SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd") {
t.Errorf("Expected String() output to contain OSDiskUriReadOnlySas")
}
if !strings.Contains(testSubject, "TemplateUri: https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json") {
t.Errorf("Expected String() output to contain TemplateUri")
}
if !strings.Contains(testSubject, "TemplateUriReadOnlySas: SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json") {
t.Errorf("Expected String() output to contain TemplateUriReadOnlySas")
}
if !strings.Contains(testSubject, "StorageAccountLocation: southcentralus") {
t.Errorf("Expected String() output to contain StorageAccountLocation")
}
if !strings.Contains(testSubject, "OSType: Linux") {
t.Errorf("Expected String() output to contain OSType")
}
}
func TestAdditionalDiskArtifactString(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
DataDisks: []CaptureDisk{
{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
},
},
},
Location: "southcentralus",
},
},
}
artifact, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
testSubject := artifact.String()
if !strings.Contains(testSubject, "OSDiskUri: https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd") {
t.Errorf("Expected String() output to contain OSDiskUri")
}
if !strings.Contains(testSubject, "OSDiskUriReadOnlySas: SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd") {
t.Errorf("Expected String() output to contain OSDiskUriReadOnlySas")
}
if !strings.Contains(testSubject, "TemplateUri: https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json") {
t.Errorf("Expected String() output to contain TemplateUri")
}
if !strings.Contains(testSubject, "TemplateUriReadOnlySas: SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json") {
t.Errorf("Expected String() output to contain TemplateUriReadOnlySas")
}
if !strings.Contains(testSubject, "StorageAccountLocation: southcentralus") {
t.Errorf("Expected String() output to contain StorageAccountLocation")
}
if !strings.Contains(testSubject, "OSType: Linux") {
t.Errorf("Expected String() output to contain OSType")
}
if !strings.Contains(testSubject, "AdditionalDiskUri (datadisk-1): https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd") {
t.Errorf("Expected String() output to contain AdditionalDiskUri")
}
if !strings.Contains(testSubject, "AdditionalDiskUriReadOnlySas (datadisk-1): SAS-Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd") {
t.Errorf("Expected String() output to contain AdditionalDiskUriReadOnlySas")
}
}
func TestArtifactProperties(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
},
},
Location: "southcentralus",
},
},
}
testSubject, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
if testSubject.OSDiskUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd" {
t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", testSubject.OSDiskUri)
}
if testSubject.OSDiskUriReadOnlySas != "SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd" {
t.Errorf("Expected template to be 'SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", testSubject.OSDiskUriReadOnlySas)
}
if testSubject.TemplateUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" {
t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUri)
}
if testSubject.TemplateUriReadOnlySas != "SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" {
t.Errorf("Expected template to be 'SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUriReadOnlySas)
}
if testSubject.StorageAccountLocation != "southcentralus" {
t.Errorf("Expected StorageAccountLocation to be 'southcentral', but got %s", testSubject.StorageAccountLocation)
}
if testSubject.OSType != "Linux" {
t.Errorf("Expected OSType to be 'Linux', but got %s", testSubject.OSType)
}
}
func TestAdditionalDiskArtifactProperties(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
DataDisks: []CaptureDisk{
{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
},
},
},
Location: "southcentralus",
},
},
}
testSubject, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
if testSubject.OSDiskUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd" {
t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", testSubject.OSDiskUri)
}
if testSubject.OSDiskUriReadOnlySas != "SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd" {
t.Errorf("Expected template to be 'SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", testSubject.OSDiskUriReadOnlySas)
}
if testSubject.TemplateUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" {
t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUri)
}
if testSubject.TemplateUriReadOnlySas != "SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" {
t.Errorf("Expected template to be 'SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUriReadOnlySas)
}
if testSubject.StorageAccountLocation != "southcentralus" {
t.Errorf("Expected StorageAccountLocation to be 'southcentral', but got %s", testSubject.StorageAccountLocation)
}
if testSubject.OSType != "Linux" {
t.Errorf("Expected OSType to be 'Linux', but got %s", testSubject.OSType)
}
if testSubject.AdditionalDisks == nil {
t.Errorf("Expected AdditionalDisks to be not nil")
}
if len(*testSubject.AdditionalDisks) != 1 {
t.Errorf("Expected AdditionalDisks to have one additional disk, but got %d", len(*testSubject.AdditionalDisks))
}
if (*testSubject.AdditionalDisks)[0].AdditionalDiskUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd" {
t.Errorf("Expected additional disk uri to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", (*testSubject.AdditionalDisks)[0].AdditionalDiskUri)
}
if (*testSubject.AdditionalDisks)[0].AdditionalDiskUriReadOnlySas != "SAS-Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd" {
t.Errorf("Expected additional disk sas to be 'SAS-Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", (*testSubject.AdditionalDisks)[0].AdditionalDiskUriReadOnlySas)
}
}
func TestArtifactOverHyphenatedCaptureUri(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/pac-ker-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd",
},
},
},
},
Location: "southcentralus",
},
},
}
testSubject, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err != nil {
t.Fatalf("err=%s", err)
}
if testSubject.TemplateUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/pac-ker-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" {
t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/pac-ker-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUri)
}
}
func TestArtifactRejectMalformedTemplates(t *testing.T) {
template := CaptureTemplate{}
_, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err == nil {
t.Fatalf("Expected artifact creation to fail, but it succeeded.")
}
}
func TestArtifactRejectMalformedStorageUri(t *testing.T) {
template := CaptureTemplate{
Resources: []CaptureResources{
{
Properties: CaptureProperties{
StorageProfile: CaptureStorageProfile{
OSDisk: CaptureDisk{
Image: CaptureUri{
Uri: "bark",
},
},
},
},
},
},
}
_, err := NewArtifact(&template, getFakeSasUrl, "Linux", generatedData())
if err == nil {
t.Fatalf("Expected artifact creation to fail, but it succeeded.")
}
}
func TestArtifactState_StateData(t *testing.T) {
expectedData := "this is the data"
artifact := &Artifact{
StateData: map[string]interface{}{"state_data": expectedData},
}
// Valid state
result := artifact.State("state_data")
if result != expectedData {
t.Fatalf("Bad: State data was %s instead of %s", result, expectedData)
}
// Invalid state
result = artifact.State("invalid_key")
if result != nil {
t.Fatalf("Bad: State should be nil for invalid state data name")
}
// Nil StateData should not fail and should return nil
artifact = &Artifact{}
result = artifact.State("key")
if result != nil {
t.Fatalf("Bad: State should be nil for nil StateData")
}
}
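These are ordinary Go unit tests, so they can be run in isolation with go test's -run filter (a sketch; the package path depends on where the extracted code lives):
# Run only the artifact tests from within the package directory.
go test -run 'TestArtifact' ./...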

View File

@ -1,311 +0,0 @@
package arm
import (
"context"
"encoding/json"
"fmt"
"math"
"net/http"
"net/url"
"os"
"strconv"
"time"
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
newCompute "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-03-01/compute"
"github.com/Azure/azure-sdk-for-go/services/keyvault/mgmt/2018-02-14/keyvault"
"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
armStorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/hashicorp/packer-plugin-sdk/useragent"
"github.com/hashicorp/packer/builder/azure/common"
"github.com/hashicorp/packer/builder/azure/version"
)
const (
EnvPackerLogAzureMaxLen = "PACKER_LOG_AZURE_MAXLEN"
)
type AzureClient struct {
storage.BlobStorageClient
resources.DeploymentsClient
resources.DeploymentOperationsClient
resources.GroupsClient
network.PublicIPAddressesClient
network.InterfacesClient
network.SubnetsClient
network.VirtualNetworksClient
network.SecurityGroupsClient
compute.ImagesClient
compute.VirtualMachinesClient
common.VaultClient
armStorage.AccountsClient
compute.DisksClient
compute.SnapshotsClient
newCompute.GalleryImageVersionsClient
newCompute.GalleryImagesClient
InspectorMaxLength int
Template *CaptureTemplate
LastError azureErrorResponse
VaultClientDelete keyvault.VaultsClient
}
func getCaptureResponse(body string) *CaptureTemplate {
var operation CaptureOperation
err := json.Unmarshal([]byte(body), &operation)
if err != nil {
return nil
}
if operation.Properties != nil && operation.Properties.Output != nil {
return operation.Properties.Output
}
return nil
}
// HACK(chrboum): This method is a hack. It was written to work around this issue
// (https://github.com/Azure/azure-sdk-for-go/issues/307) and to an extent this
// issue (https://github.com/Azure/azure-rest-api-specs/issues/188).
//
// Capturing a VM is a long running operation that requires polling. There are
// couple different forms of polling, and the end result of a poll operation is
// discarded by the SDK. It is expected that any discarded data can be re-fetched,
// so discarding it has minimal impact. Unfortunately, there is no way to re-fetch
// the template returned by a capture call that I am aware of.
//
// If the second issue were fixed the VM ID would be included when GET'ing a VM. The
// VM ID could be used to locate the captured VHD, and captured template.
// Unfortunately, the VM ID is not included so this method cannot be used either.
//
// This code captures the template and saves it to the client (the AzureClient type).
// It expects that the capture API is called only once, or rather that only the last
// call's value matters, because each response overwrites the previous one. There
// is no care given to multiple threads writing this value because for our use case
// it does not matter.
func templateCapture(client *AzureClient) autorest.RespondDecorator {
return func(r autorest.Responder) autorest.Responder {
return autorest.ResponderFunc(func(resp *http.Response) error {
body, bodyString := handleBody(resp.Body, math.MaxInt64)
resp.Body = body
captureTemplate := getCaptureResponse(bodyString)
if captureTemplate != nil {
client.Template = captureTemplate
}
return r.Respond(resp)
})
}
}
func errorCapture(client *AzureClient) autorest.RespondDecorator {
return func(r autorest.Responder) autorest.Responder {
return autorest.ResponderFunc(func(resp *http.Response) error {
body, bodyString := handleBody(resp.Body, math.MaxInt64)
resp.Body = body
errorResponse := newAzureErrorResponse(bodyString)
if errorResponse != nil {
client.LastError = *errorResponse
}
return r.Respond(resp)
})
}
}
// WAITING(chrboum): I have logged https://github.com/Azure/azure-sdk-for-go/issues/311 to get this
// method included in the SDK. It has been accepted, and I'll cut over to the official way
// once it ships.
func byConcatDecorators(decorators ...autorest.RespondDecorator) autorest.RespondDecorator {
return func(r autorest.Responder) autorest.Responder {
return autorest.DecorateResponder(r, decorators...)
}
}
func NewAzureClient(subscriptionID, sigSubscriptionID, resourceGroupName, storageAccountName string,
cloud *azure.Environment, sharedGalleryTimeout time.Duration, pollingDuration time.Duration,
servicePrincipalToken, servicePrincipalTokenVault *adal.ServicePrincipalToken) (*AzureClient, error) {
var azureClient = &AzureClient{}
maxlen := getInspectorMaxLength()
azureClient.DeploymentsClient = resources.NewDeploymentsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.DeploymentsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.DeploymentsClient.RequestInspector = withInspection(maxlen)
azureClient.DeploymentsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.DeploymentsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.DeploymentsClient.UserAgent)
azureClient.DeploymentsClient.Client.PollingDuration = pollingDuration
azureClient.DeploymentOperationsClient = resources.NewDeploymentOperationsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.DeploymentOperationsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.DeploymentOperationsClient.RequestInspector = withInspection(maxlen)
azureClient.DeploymentOperationsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.DeploymentOperationsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.DeploymentOperationsClient.UserAgent)
azureClient.DeploymentOperationsClient.Client.PollingDuration = pollingDuration
azureClient.DisksClient = compute.NewDisksClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.DisksClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.DisksClient.RequestInspector = withInspection(maxlen)
azureClient.DisksClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.DisksClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.DisksClient.UserAgent)
azureClient.DisksClient.Client.PollingDuration = pollingDuration
azureClient.GroupsClient = resources.NewGroupsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.GroupsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.GroupsClient.RequestInspector = withInspection(maxlen)
azureClient.GroupsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.GroupsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.GroupsClient.UserAgent)
azureClient.GroupsClient.Client.PollingDuration = pollingDuration
azureClient.ImagesClient = compute.NewImagesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.ImagesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.ImagesClient.RequestInspector = withInspection(maxlen)
azureClient.ImagesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.ImagesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.ImagesClient.UserAgent)
azureClient.ImagesClient.Client.PollingDuration = pollingDuration
azureClient.InterfacesClient = network.NewInterfacesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.InterfacesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.InterfacesClient.RequestInspector = withInspection(maxlen)
azureClient.InterfacesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.InterfacesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.InterfacesClient.UserAgent)
azureClient.InterfacesClient.Client.PollingDuration = pollingDuration
azureClient.SubnetsClient = network.NewSubnetsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.SubnetsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.SubnetsClient.RequestInspector = withInspection(maxlen)
azureClient.SubnetsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.SubnetsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.SubnetsClient.UserAgent)
azureClient.SubnetsClient.Client.PollingDuration = pollingDuration
azureClient.VirtualNetworksClient = network.NewVirtualNetworksClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.VirtualNetworksClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.VirtualNetworksClient.RequestInspector = withInspection(maxlen)
azureClient.VirtualNetworksClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.VirtualNetworksClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.VirtualNetworksClient.UserAgent)
azureClient.VirtualNetworksClient.Client.PollingDuration = pollingDuration
azureClient.SecurityGroupsClient = network.NewSecurityGroupsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.SecurityGroupsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.SecurityGroupsClient.RequestInspector = withInspection(maxlen)
azureClient.SecurityGroupsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.SecurityGroupsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.SecurityGroupsClient.UserAgent)
azureClient.PublicIPAddressesClient = network.NewPublicIPAddressesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.PublicIPAddressesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.PublicIPAddressesClient.RequestInspector = withInspection(maxlen)
azureClient.PublicIPAddressesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.PublicIPAddressesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.PublicIPAddressesClient.UserAgent)
azureClient.PublicIPAddressesClient.Client.PollingDuration = pollingDuration
azureClient.VirtualMachinesClient = compute.NewVirtualMachinesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.VirtualMachinesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.VirtualMachinesClient.RequestInspector = withInspection(maxlen)
azureClient.VirtualMachinesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), templateCapture(azureClient), errorCapture(azureClient))
azureClient.VirtualMachinesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.VirtualMachinesClient.UserAgent)
azureClient.VirtualMachinesClient.Client.PollingDuration = pollingDuration
azureClient.SnapshotsClient = compute.NewSnapshotsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.SnapshotsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.SnapshotsClient.RequestInspector = withInspection(maxlen)
azureClient.SnapshotsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.SnapshotsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.SnapshotsClient.UserAgent)
azureClient.SnapshotsClient.Client.PollingDuration = pollingDuration
azureClient.AccountsClient = armStorage.NewAccountsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.AccountsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.AccountsClient.RequestInspector = withInspection(maxlen)
azureClient.AccountsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.AccountsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.AccountsClient.UserAgent)
azureClient.AccountsClient.Client.PollingDuration = pollingDuration
azureClient.GalleryImageVersionsClient = newCompute.NewGalleryImageVersionsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.GalleryImageVersionsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.GalleryImageVersionsClient.RequestInspector = withInspection(maxlen)
azureClient.GalleryImageVersionsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.GalleryImageVersionsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.GalleryImageVersionsClient.UserAgent)
azureClient.GalleryImageVersionsClient.Client.PollingDuration = sharedGalleryTimeout
if sigSubscriptionID != "" {
azureClient.GalleryImageVersionsClient.SubscriptionID = sigSubscriptionID
}
azureClient.GalleryImagesClient = newCompute.NewGalleryImagesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID)
azureClient.GalleryImagesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.GalleryImagesClient.RequestInspector = withInspection(maxlen)
azureClient.GalleryImagesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.GalleryImagesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.GalleryImagesClient.UserAgent)
azureClient.GalleryImagesClient.Client.PollingDuration = pollingDuration
if sigSubscriptionID != "" {
azureClient.GalleryImagesClient.SubscriptionID = sigSubscriptionID
}
keyVaultURL, err := url.Parse(cloud.KeyVaultEndpoint)
if err != nil {
return nil, err
}
azureClient.VaultClient = common.NewVaultClient(*keyVaultURL)
azureClient.VaultClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalTokenVault)
azureClient.VaultClient.RequestInspector = withInspection(maxlen)
azureClient.VaultClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.VaultClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.VaultClient.UserAgent)
azureClient.VaultClient.Client.PollingDuration = pollingDuration
// This client is different than the above because it manages the vault
// itself rather than the contents of the vault.
azureClient.VaultClientDelete = keyvault.NewVaultsClient(subscriptionID)
azureClient.VaultClientDelete.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken)
azureClient.VaultClientDelete.RequestInspector = withInspection(maxlen)
azureClient.VaultClientDelete.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient))
azureClient.VaultClientDelete.UserAgent = fmt.Sprintf("%s %s", useragent.String(version.AzurePluginVersion.FormattedVersion()), azureClient.VaultClientDelete.UserAgent)
azureClient.VaultClientDelete.Client.PollingDuration = pollingDuration
// If this is a managed disk build, this should be ignored.
if resourceGroupName != "" && storageAccountName != "" {
accountKeys, err := azureClient.AccountsClient.ListKeys(context.TODO(), resourceGroupName, storageAccountName)
if err != nil {
return nil, err
}
storageClient, err := storage.NewClient(
storageAccountName,
*(*accountKeys.Keys)[0].Value,
cloud.StorageEndpointSuffix,
storage.DefaultAPIVersion,
true /*useHttps*/)
if err != nil {
return nil, err
}
azureClient.BlobStorageClient = storageClient.GetBlobService()
}
return azureClient, nil
}
func getInspectorMaxLength() int64 {
value, ok := os.LookupEnv(EnvPackerLogAzureMaxLen)
if !ok {
return math.MaxInt64
}
i, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return 0
}
if i < 0 {
return 0
}
return i
}
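getInspectorMaxLength is driven entirely by the PACKER_LOG_AZURE_MAXLEN environment variable: unset means no limit (MaxInt64), while a negative or non-numeric value disables the dumps. Trimming the request/response logging in a debug build looks roughly like this (a sketch; the template name is a placeholder):
# Cap Azure request/response log dumps at 4096 bytes while debug logging is on.
export PACKER_LOG=1
export PACKER_LOG_AZURE_MAXLEN=4096
packer build azure-arm.json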

View File

@ -1,67 +0,0 @@
package arm
import (
"bytes"
"encoding/json"
"fmt"
)
type azureErrorDetails struct {
Code string `json:"code"`
Message string `json:"message"`
Details []azureErrorDetails `json:"details"`
}
type azureErrorResponse struct {
ErrorDetails azureErrorDetails `json:"error"`
}
func newAzureErrorResponse(s string) *azureErrorResponse {
var errorResponse azureErrorResponse
err := json.Unmarshal([]byte(s), &errorResponse)
if err == nil {
return &errorResponse
}
return nil
}
func (e *azureErrorDetails) isEmpty() bool {
return e.Code == ""
}
func (e *azureErrorResponse) isEmpty() bool {
return e.ErrorDetails.isEmpty()
}
func (e *azureErrorResponse) Error() string {
var buf bytes.Buffer
//buf.WriteString("-=-=- ERROR -=-=-")
formatAzureErrorResponse(e.ErrorDetails, &buf, "")
//buf.WriteString("-=-=- ERROR -=-=-")
return buf.String()
}
// format an Azure error response by recursing through the JSON structure.
//
// Errors may contain nested errors, which are JSON documents that have been
// serialized and escaped. Keep following this nesting all the way down...
func formatAzureErrorResponse(error azureErrorDetails, buf *bytes.Buffer, indent string) {
if error.isEmpty() {
return
}
buf.WriteString(fmt.Sprintf("ERROR: %s-> %s : %s\n", indent, error.Code, error.Message))
for _, x := range error.Details {
newIndent := fmt.Sprintf("%s ", indent)
var aer azureErrorResponse
err := json.Unmarshal([]byte(x.Message), &aer)
if err == nil {
buf.WriteString(fmt.Sprintf("ERROR: %s-> %s\n", newIndent, x.Code))
formatAzureErrorResponse(aer.ErrorDetails, buf, newIndent)
} else {
buf.WriteString(fmt.Sprintf("ERROR: %s-> %s : %s\n", newIndent, x.Code, x.Message))
}
}
}

View File

@ -1,4 +0,0 @@
ERROR: -> DeploymentFailed : At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
ERROR: -> BadRequest
ERROR: -> InvalidRequestFormat : Cannot parse the request.
ERROR: -> InvalidJson : Error converting value "playground" to type 'Microsoft.WindowsAzure.Networking.Nrp.Frontend.Contract.Csm.Public.IpAllocationMethod'. Path 'properties.publicIPAllocationMethod', line 1, position 130.

View File

@ -1 +0,0 @@
ERROR: -> ResourceNotFound : The Resource 'Microsoft.Compute/images/PackerUbuntuImage' under resource group 'packer-test00' was not found.

View File

@ -1,77 +0,0 @@
package arm
import (
"strings"
"testing"
approvaltests "github.com/approvals/go-approval-tests"
"github.com/hashicorp/packer-plugin-sdk/json"
)
const AzureErrorSimple = `{"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/images/PackerUbuntuImage' under resource group 'packer-test00' was not found."}}`
const AzureErrorNested = `{"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"BadRequest","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidRequestFormat\",\r\n \"message\": \"Cannot parse the request.\",\r\n \"details\": [\r\n {\r\n \"code\": \"InvalidJson\",\r\n \"message\": \"Error converting value \\\"playground\\\" to type 'Microsoft.WindowsAzure.Networking.Nrp.Frontend.Contract.Csm.Public.IpAllocationMethod'. Path 'properties.publicIPAllocationMethod', line 1, position 130.\"\r\n }\r\n ]\r\n }\r\n}"}]}}`
func TestAzureErrorSimpleShouldUnmarshal(t *testing.T) {
var azureErrorResponse azureErrorResponse
err := json.Unmarshal([]byte(AzureErrorSimple), &azureErrorResponse)
if err != nil {
t.Fatal(err)
}
if azureErrorResponse.ErrorDetails.Code != "ResourceNotFound" {
t.Errorf("Error.Code")
}
if azureErrorResponse.ErrorDetails.Message != "The Resource 'Microsoft.Compute/images/PackerUbuntuImage' under resource group 'packer-test00' was not found." {
t.Errorf("Error.Message")
}
}
func TestAzureErrorNestedShouldUnmarshal(t *testing.T) {
var azureError azureErrorResponse
err := json.Unmarshal([]byte(AzureErrorNested), &azureError)
if err != nil {
t.Fatal(err)
}
if azureError.ErrorDetails.Code != "DeploymentFailed" {
t.Errorf("Error.Code")
}
if !strings.HasPrefix(azureError.ErrorDetails.Message, "At least one resource deployment operation failed") {
t.Errorf("Error.Message")
}
}
func TestAzureErrorEmptyShouldFormat(t *testing.T) {
var aer azureErrorResponse
s := aer.Error()
if s != "" {
t.Fatalf("Expected \"\", but got %s", aer.Error())
}
}
func TestAzureErrorSimpleShouldFormat(t *testing.T) {
var azureErrorResponse azureErrorResponse
err := json.Unmarshal([]byte(AzureErrorSimple), &azureErrorResponse)
if err != nil {
t.Fatal(err)
}
err = approvaltests.VerifyString(t, azureErrorResponse.Error())
if err != nil {
t.Fatal(err)
}
}
func TestAzureErrorNestedShouldFormat(t *testing.T) {
var azureErrorResponse azureErrorResponse
err := json.Unmarshal([]byte(AzureErrorNested), &azureErrorResponse)
if err != nil {
t.Fatal(err)
}
err = approvaltests.VerifyString(t, azureErrorResponse.Error())
if err != nil {
t.Fatal(err)
}
}

View File

@ -1,477 +0,0 @@
package arm
import (
"context"
"errors"
"fmt"
"log"
"os"
"runtime"
"strings"
"time"
armstorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/dgrijalva/jwt-go"
"github.com/hashicorp/hcl/v2/hcldec"
"github.com/hashicorp/packer-plugin-sdk/communicator"
"github.com/hashicorp/packer-plugin-sdk/multistep"
"github.com/hashicorp/packer-plugin-sdk/multistep/commonsteps"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
packerAzureCommon "github.com/hashicorp/packer/builder/azure/common"
"github.com/hashicorp/packer/builder/azure/common/constants"
"github.com/hashicorp/packer/builder/azure/common/lin"
)
type Builder struct {
config Config
stateBag multistep.StateBag
runner multistep.Runner
}
const (
DefaultSasBlobContainer = "system/Microsoft.Compute"
DefaultSecretName = "packerKeyVaultSecret"
)
func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return b.config.FlatMapstructure().HCL2Spec() }
func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
warnings, errs := b.config.Prepare(raws...)
if errs != nil {
return nil, warnings, errs
}
b.stateBag = new(multistep.BasicStateBag)
b.configureStateBag(b.stateBag)
b.setTemplateParameters(b.stateBag)
b.setImageParameters(b.stateBag)
return nil, warnings, errs
}
func (b *Builder) Run(ctx context.Context, ui packersdk.Ui, hook packersdk.Hook) (packersdk.Artifact, error) {
ui.Say("Running builder ...")
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// FillParameters function captures authType and sets defaults.
err := b.config.ClientConfig.FillParameters()
if err != nil {
return nil, err
}
// When running Packer on an Azure instance using Managed Identity, FillParameters will update SubscriptionID from the instance,
// so let's make sure to update our state bag with the valid SubscriptionID.
if b.config.isManagedImage() && b.config.SharedGalleryDestination.SigDestinationGalleryName != "" {
b.stateBag.Put(constants.ArmManagedImageSubscription, b.config.ClientConfig.SubscriptionID)
}
log.Print(":: Configuration")
packerAzureCommon.DumpConfig(&b.config, func(s string) { log.Print(s) })
b.stateBag.Put("hook", hook)
b.stateBag.Put(constants.Ui, ui)
spnCloud, spnKeyVault, err := b.getServicePrincipalTokens(ui.Say)
if err != nil {
return nil, err
}
ui.Message("Creating Azure Resource Manager (ARM) client ...")
azureClient, err := NewAzureClient(
b.config.ClientConfig.SubscriptionID,
b.config.SharedGalleryDestination.SigDestinationSubscription,
b.config.ResourceGroupName,
b.config.StorageAccount,
b.config.ClientConfig.CloudEnvironment(),
b.config.SharedGalleryTimeout,
b.config.PollingDurationTimeout,
spnCloud,
spnKeyVault)
if err != nil {
return nil, err
}
resolver := newResourceResolver(azureClient)
if err := resolver.Resolve(&b.config); err != nil {
return nil, err
}
if b.config.ClientConfig.ObjectID == "" {
b.config.ClientConfig.ObjectID = getObjectIdFromToken(ui, spnCloud)
} else {
ui.Message("You have provided Object_ID which is no longer needed, azure packer builder determines this dynamically from the authentication token")
}
if b.config.ClientConfig.ObjectID == "" && b.config.OSType != constants.Target_Linux {
return nil, fmt.Errorf("could not determine the ObjectID for the user, which is required for Windows builds")
}
if b.config.isManagedImage() {
_, err := azureClient.GroupsClient.Get(ctx, b.config.ManagedImageResourceGroupName)
if err != nil {
return nil, fmt.Errorf("Cannot locate the managed image resource group %s.", b.config.ManagedImageResourceGroupName)
}
// If a managed image already exists it cannot be overwritten.
_, err = azureClient.ImagesClient.Get(ctx, b.config.ManagedImageResourceGroupName, b.config.ManagedImageName, "")
if err == nil {
if b.config.PackerForce {
ui.Say(fmt.Sprintf("the managed image named %s already exists, but deleting it due to -force flag", b.config.ManagedImageName))
f, err := azureClient.ImagesClient.Delete(ctx, b.config.ManagedImageResourceGroupName, b.config.ManagedImageName)
if err == nil {
err = f.WaitForCompletionRef(ctx, azureClient.ImagesClient.Client)
}
if err != nil {
return nil, fmt.Errorf("failed to delete the managed image named %s : %s", b.config.ManagedImageName, azureClient.LastError.Error())
}
} else {
return nil, fmt.Errorf("the managed image named %s already exists in the resource group %s, use the -force option to automatically delete it.", b.config.ManagedImageName, b.config.ManagedImageResourceGroupName)
}
}
} else {
// User is not using Managed Images to build, warning message here that this path is being deprecated
ui.Error("Warning: You are using Azure Packer Builder to create VHDs which is being deprecated, consider using Managed Images. Learn more https://www.packer.io/docs/builders/azure/arm#azure-arm-builder-specific-options")
}
if b.config.BuildResourceGroupName != "" {
group, err := azureClient.GroupsClient.Get(ctx, b.config.BuildResourceGroupName)
if err != nil {
return nil, fmt.Errorf("Cannot locate the existing build resource resource group %s.", b.config.BuildResourceGroupName)
}
b.config.Location = *group.Location
}
b.config.validateLocationZoneResiliency(ui.Say)
if b.config.StorageAccount != "" {
account, err := b.getBlobAccount(ctx, azureClient, b.config.ResourceGroupName, b.config.StorageAccount)
if err != nil {
return nil, err
}
b.config.storageAccountBlobEndpoint = *account.AccountProperties.PrimaryEndpoints.Blob
if !equalLocation(*account.Location, b.config.Location) {
return nil, fmt.Errorf("The storage account is located in %s, but the build will take place in %s. The locations must be identical", *account.Location, b.config.Location)
}
}
endpointConnectType := PublicEndpoint
if b.isPublicPrivateNetworkCommunication() && b.isPrivateNetworkCommunication() {
endpointConnectType = PublicEndpointInPrivateNetwork
} else if b.isPrivateNetworkCommunication() {
endpointConnectType = PrivateEndpoint
}
b.setRuntimeParameters(b.stateBag)
b.setTemplateParameters(b.stateBag)
b.setImageParameters(b.stateBag)
deploymentName := b.stateBag.Get(constants.ArmDeploymentName).(string)
// For Managed Images, validate that Shared Gallery Image exists before publishing to SIG
if b.config.isManagedImage() && b.config.SharedGalleryDestination.SigDestinationGalleryName != "" {
_, err = azureClient.GalleryImagesClient.Get(ctx, b.config.SharedGalleryDestination.SigDestinationResourceGroup, b.config.SharedGalleryDestination.SigDestinationGalleryName, b.config.SharedGalleryDestination.SigDestinationImageName)
if err != nil {
return nil, fmt.Errorf("the Shared Gallery Image to which to publish the managed image version to does not exist in the resource group %s", b.config.SharedGalleryDestination.SigDestinationResourceGroup)
}
// SIG requires that replication regions include the region in which the Managed Image resides
managedImageLocation := normalizeAzureRegion(b.stateBag.Get(constants.ArmLocation).(string))
foundMandatoryReplicationRegion := false
var normalizedReplicationRegions []string
for _, region := range b.config.SharedGalleryDestination.SigDestinationReplicationRegions {
// change region to lower-case and strip spaces
normalizedRegion := normalizeAzureRegion(region)
normalizedReplicationRegions = append(normalizedReplicationRegions, normalizedRegion)
if strings.EqualFold(normalizedRegion, managedImageLocation) {
foundMandatoryReplicationRegion = true
continue
}
}
if !foundMandatoryReplicationRegion {
b.config.SharedGalleryDestination.SigDestinationReplicationRegions = append(normalizedReplicationRegions, managedImageLocation)
}
b.stateBag.Put(constants.ArmManagedImageSharedGalleryReplicationRegions, b.config.SharedGalleryDestination.SigDestinationReplicationRegions)
}
var steps []multistep.Step
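// Assemble the build pipeline. Linux builds connect over SSH; Windows builds first place a
// certificate in Key Vault (deploying a temporary Key Vault unless build_key_vault_name is set)
// and then connect over WinRM. Both pipelines end by snapshotting disks, capturing the image,
// and optionally publishing to a Shared Image Gallery.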
if b.config.OSType == constants.Target_Linux {
steps = []multistep.Step{
NewStepCreateResourceGroup(azureClient, ui),
NewStepValidateTemplate(azureClient, ui, &b.config, GetVirtualMachineDeployment),
NewStepDeployTemplate(azureClient, ui, &b.config, deploymentName, GetVirtualMachineDeployment),
NewStepGetIPAddress(azureClient, ui, endpointConnectType),
&communicator.StepConnectSSH{
Config: &b.config.Comm,
Host: lin.SSHHost,
SSHConfig: b.config.Comm.SSHConfigFunc(),
},
&commonsteps.StepProvision{},
&commonsteps.StepCleanupTempKeys{
Comm: &b.config.Comm,
},
NewStepGetOSDisk(azureClient, ui),
NewStepGetAdditionalDisks(azureClient, ui),
NewStepPowerOffCompute(azureClient, ui),
NewStepSnapshotOSDisk(azureClient, ui, &b.config),
NewStepSnapshotDataDisks(azureClient, ui, &b.config),
NewStepCaptureImage(azureClient, ui),
NewStepPublishToSharedImageGallery(azureClient, ui, &b.config),
}
} else if b.config.OSType == constants.Target_Windows {
steps = []multistep.Step{
NewStepCreateResourceGroup(azureClient, ui),
}
if b.config.BuildKeyVaultName == "" {
keyVaultDeploymentName := b.stateBag.Get(constants.ArmKeyVaultDeploymentName).(string)
steps = append(steps,
NewStepValidateTemplate(azureClient, ui, &b.config, GetKeyVaultDeployment),
NewStepDeployTemplate(azureClient, ui, &b.config, keyVaultDeploymentName, GetKeyVaultDeployment),
)
} else {
steps = append(steps, NewStepCertificateInKeyVault(&azureClient.VaultClient, ui, &b.config))
}
steps = append(steps,
NewStepGetCertificate(azureClient, ui),
NewStepSetCertificate(&b.config, ui),
NewStepValidateTemplate(azureClient, ui, &b.config, GetVirtualMachineDeployment),
NewStepDeployTemplate(azureClient, ui, &b.config, deploymentName, GetVirtualMachineDeployment),
NewStepGetIPAddress(azureClient, ui, endpointConnectType),
&communicator.StepConnectWinRM{
Config: &b.config.Comm,
Host: func(stateBag multistep.StateBag) (string, error) {
return stateBag.Get(constants.SSHHost).(string), nil
},
WinRMConfig: func(multistep.StateBag) (*communicator.WinRMConfig, error) {
return &communicator.WinRMConfig{
Username: b.config.UserName,
Password: b.config.Password,
}, nil
},
},
&commonsteps.StepProvision{},
NewStepGetOSDisk(azureClient, ui),
NewStepGetAdditionalDisks(azureClient, ui),
NewStepPowerOffCompute(azureClient, ui),
NewStepSnapshotOSDisk(azureClient, ui, &b.config),
NewStepSnapshotDataDisks(azureClient, ui, &b.config),
NewStepCaptureImage(azureClient, ui),
NewStepPublishToSharedImageGallery(azureClient, ui, &b.config),
)
} else {
return nil, fmt.Errorf("Builder does not support the os_type '%s'", b.config.OSType)
}
if b.config.PackerDebug {
ui.Message(fmt.Sprintf("temp admin user: '%s'", b.config.UserName))
ui.Message(fmt.Sprintf("temp admin password: '%s'", b.config.Password))
if len(b.config.Comm.SSHPrivateKey) != 0 {
debugKeyPath := fmt.Sprintf("%s-%s.pem", b.config.PackerBuildName, b.config.tmpComputeName)
ui.Message(fmt.Sprintf("temp ssh key: %s", debugKeyPath))
b.writeSSHPrivateKey(ui, debugKeyPath)
}
}
b.runner = commonsteps.NewRunner(steps, b.config.PackerConfig, ui)
b.runner.Run(ctx, b.stateBag)
// Report any errors.
if rawErr, ok := b.stateBag.GetOk(constants.Error); ok {
return nil, rawErr.(error)
}
// If we were interrupted or cancelled, then just exit.
if _, ok := b.stateBag.GetOk(multistep.StateCancelled); ok {
return nil, errors.New("Build was cancelled.")
}
if _, ok := b.stateBag.GetOk(multistep.StateHalted); ok {
return nil, errors.New("Build was halted.")
}
generatedData := map[string]interface{}{"generated_data": b.stateBag.Get("generated_data")}
if b.config.isManagedImage() {
managedImageID := fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Compute/images/%s",
b.config.ClientConfig.SubscriptionID, b.config.ManagedImageResourceGroupName, b.config.ManagedImageName)
if b.config.SharedGalleryDestination.SigDestinationGalleryName != "" {
return NewManagedImageArtifactWithSIGAsDestination(b.config.OSType,
b.config.ManagedImageResourceGroupName,
b.config.ManagedImageName,
b.config.Location,
managedImageID,
b.config.ManagedImageOSDiskSnapshotName,
b.config.ManagedImageDataDiskSnapshotPrefix,
b.stateBag.Get(constants.ArmManagedImageSharedGalleryId).(string),
generatedData)
}
return NewManagedImageArtifact(b.config.OSType,
b.config.ManagedImageResourceGroupName,
b.config.ManagedImageName,
b.config.Location,
managedImageID,
b.config.ManagedImageOSDiskSnapshotName,
b.config.ManagedImageDataDiskSnapshotPrefix,
generatedData)
} else if template, ok := b.stateBag.GetOk(constants.ArmCaptureTemplate); ok {
return NewArtifact(
template.(*CaptureTemplate),
func(name string) string {
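// Generate a read-only SAS URL for the captured VHD blob; the URL expires one month after the build.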
blob := azureClient.BlobStorageClient.GetContainerReference(DefaultSasBlobContainer).GetBlobReference(name)
options := storage.BlobSASOptions{}
options.BlobServiceSASPermissions.Read = true
options.Expiry = time.Now().AddDate(0, 1, 0).UTC() // one month
sasUrl, _ := blob.GetSASURI(options)
return sasUrl
},
b.config.OSType,
generatedData)
}
return &Artifact{
StateData: generatedData,
}, nil
}
func (b *Builder) writeSSHPrivateKey(ui packersdk.Ui, debugKeyPath string) {
f, err := os.Create(debugKeyPath)
if err != nil {
ui.Say(fmt.Sprintf("Error saving debug key: %s", err))
}
defer f.Close()
// Write the key out
if _, err := f.Write(b.config.Comm.SSHPrivateKey); err != nil {
ui.Say(fmt.Sprintf("Error saving debug key: %s", err))
return
}
// Chmod it so that it is SSH ready
if runtime.GOOS != "windows" {
if err := f.Chmod(0600); err != nil {
ui.Say(fmt.Sprintf("Error setting permissions of debug key: %s", err))
}
}
}
func (b *Builder) isPublicPrivateNetworkCommunication() bool {
return DefaultPrivateVirtualNetworkWithPublicIp != b.config.PrivateVirtualNetworkWithPublicIp
}
func (b *Builder) isPrivateNetworkCommunication() bool {
return b.config.VirtualNetworkName != ""
}
func equalLocation(location1, location2 string) bool {
return strings.EqualFold(canonicalizeLocation(location1), canonicalizeLocation(location2))
}
func canonicalizeLocation(location string) string {
return strings.Replace(location, " ", "", -1)
}
func (b *Builder) getBlobAccount(ctx context.Context, client *AzureClient, resourceGroupName string, storageAccountName string) (*armstorage.Account, error) {
account, err := client.AccountsClient.GetProperties(ctx, resourceGroupName, storageAccountName)
if err != nil {
return nil, err
}
return &account, err
}
func (b *Builder) configureStateBag(stateBag multistep.StateBag) {
stateBag.Put(constants.AuthorizedKey, b.config.sshAuthorizedKey)
stateBag.Put(constants.ArmTags, packerAzureCommon.MapToAzureTags(b.config.AzureTags))
stateBag.Put(constants.ArmComputeName, b.config.tmpComputeName)
stateBag.Put(constants.ArmDeploymentName, b.config.tmpDeploymentName)
if b.config.OSType == constants.Target_Windows && b.config.BuildKeyVaultName == "" {
stateBag.Put(constants.ArmKeyVaultDeploymentName, fmt.Sprintf("kv%s", b.config.tmpDeploymentName))
}
stateBag.Put(constants.ArmKeyVaultName, b.config.tmpKeyVaultName)
stateBag.Put(constants.ArmIsExistingKeyVault, false)
if b.config.BuildKeyVaultName != "" {
stateBag.Put(constants.ArmKeyVaultName, b.config.BuildKeyVaultName)
b.config.tmpKeyVaultName = b.config.BuildKeyVaultName
stateBag.Put(constants.ArmIsExistingKeyVault, true)
}
stateBag.Put(constants.ArmNicName, b.config.tmpNicName)
stateBag.Put(constants.ArmPublicIPAddressName, b.config.tmpPublicIPAddressName)
stateBag.Put(constants.ArmResourceGroupName, b.config.BuildResourceGroupName)
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
if b.config.tmpResourceGroupName != "" {
stateBag.Put(constants.ArmResourceGroupName, b.config.tmpResourceGroupName)
stateBag.Put(constants.ArmIsExistingResourceGroup, false)
if b.config.BuildResourceGroupName != "" {
stateBag.Put(constants.ArmDoubleResourceGroupNameSet, true)
}
}
stateBag.Put(constants.ArmStorageAccountName, b.config.StorageAccount)
stateBag.Put(constants.ArmIsManagedImage, b.config.isManagedImage())
stateBag.Put(constants.ArmManagedImageResourceGroupName, b.config.ManagedImageResourceGroupName)
stateBag.Put(constants.ArmManagedImageName, b.config.ManagedImageName)
stateBag.Put(constants.ArmManagedImageOSDiskSnapshotName, b.config.ManagedImageOSDiskSnapshotName)
stateBag.Put(constants.ArmManagedImageDataDiskSnapshotPrefix, b.config.ManagedImageDataDiskSnapshotPrefix)
stateBag.Put(constants.ArmAsyncResourceGroupDelete, b.config.AsyncResourceGroupDelete)
if b.config.isManagedImage() && b.config.SharedGalleryDestination.SigDestinationGalleryName != "" {
stateBag.Put(constants.ArmManagedImageSigPublishResourceGroup, b.config.SharedGalleryDestination.SigDestinationResourceGroup)
stateBag.Put(constants.ArmManagedImageSharedGalleryName, b.config.SharedGalleryDestination.SigDestinationGalleryName)
stateBag.Put(constants.ArmManagedImageSharedGalleryImageName, b.config.SharedGalleryDestination.SigDestinationImageName)
stateBag.Put(constants.ArmManagedImageSharedGalleryImageVersion, b.config.SharedGalleryDestination.SigDestinationImageVersion)
stateBag.Put(constants.ArmManagedImageSubscription, b.config.ClientConfig.SubscriptionID)
stateBag.Put(constants.ArmManagedImageSharedGalleryImageVersionEndOfLifeDate, b.config.SharedGalleryImageVersionEndOfLifeDate)
stateBag.Put(constants.ArmManagedImageSharedGalleryImageVersionReplicaCount, b.config.SharedGalleryImageVersionReplicaCount)
stateBag.Put(constants.ArmManagedImageSharedGalleryImageVersionExcludeFromLatest, b.config.SharedGalleryImageVersionExcludeFromLatest)
}
}
// Parameters that are only known at runtime after querying Azure.
func (b *Builder) setRuntimeParameters(stateBag multistep.StateBag) {
stateBag.Put(constants.ArmLocation, b.config.Location)
}
func (b *Builder) setTemplateParameters(stateBag multistep.StateBag) {
stateBag.Put(constants.ArmVirtualMachineCaptureParameters, b.config.toVirtualMachineCaptureParameters())
}
func (b *Builder) setImageParameters(stateBag multistep.StateBag) {
stateBag.Put(constants.ArmImageParameters, b.config.toImageParameters())
}
func (b *Builder) getServicePrincipalTokens(say func(string)) (*adal.ServicePrincipalToken, *adal.ServicePrincipalToken, error) {
return b.config.ClientConfig.GetServicePrincipalTokens(say)
}
func getObjectIdFromToken(ui packersdk.Ui, token *adal.ServicePrincipalToken) string {
claims := jwt.MapClaims{}
var p jwt.Parser
var err error
_, _, err = p.ParseUnverified(token.OAuthToken(), claims)
if err != nil {
ui.Error(fmt.Sprintf("Failed to parse the token,Error: %s", err.Error()))
return ""
}
oid, _ := claims["oid"].(string)
return oid
}
func normalizeAzureRegion(name string) string {
return strings.ToLower(strings.Replace(name, " ", "", -1))
}


@@ -1,513 +0,0 @@
package arm
// these tests require the following variables to be set,
// although some test will only use a subset:
//
// * ARM_CLIENT_ID
// * ARM_CLIENT_SECRET
// * ARM_SUBSCRIPTION_ID
// * ARM_STORAGE_ACCOUNT
//
// The subscription in question should have a resource group
// called "packer-acceptance-test" in "South Central US" region. The
// storage account referred to in the above variable should
// be inside this resource group and in "South Central US" as well.
//
// In addition, the PACKER_ACC variable should also be set to
// a non-empty value to enable Packer acceptance tests and the
// options "-v -timeout 90m" should be provided to the test
// command, e.g.:
// go test -v -timeout 90m -run TestBuilderAcc_.*
import (
"bytes"
"context"
"errors"
"fmt"
"os"
"strings"
"testing"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/azure/auth"
builderT "github.com/hashicorp/packer-plugin-sdk/acctest"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
const DeviceLoginAcceptanceTest = "DEVICELOGIN_TEST"
func TestBuilderAcc_ManagedDisk_Windows(t *testing.T) {
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccManagedDiskWindows,
})
}
func TestBuilderAcc_ManagedDisk_Windows_Build_Resource_Group(t *testing.T) {
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccManagedDiskWindowsBuildResourceGroup,
})
}
func TestBuilderAcc_ManagedDisk_Windows_Build_Resource_Group_Additional_Disk(t *testing.T) {
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccManagedDiskWindowsBuildResourceGroupAdditionalDisk,
})
}
func TestBuilderAcc_ManagedDisk_Windows_DeviceLogin(t *testing.T) {
if os.Getenv(DeviceLoginAcceptanceTest) == "" {
t.Skip(fmt.Sprintf(
"Device Login Acceptance tests skipped unless env '%s' set, as its requires manual step during execution",
DeviceLoginAcceptanceTest))
return
}
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccManagedDiskWindowsDeviceLogin,
})
}
func TestBuilderAcc_ManagedDisk_Linux(t *testing.T) {
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccManagedDiskLinux,
})
}
func TestBuilderAcc_ManagedDisk_Linux_DeviceLogin(t *testing.T) {
if os.Getenv(DeviceLoginAcceptanceTest) == "" {
t.Skip(fmt.Sprintf(
"Device Login Acceptance tests skipped unless env '%s' set, as its requires manual step during execution",
DeviceLoginAcceptanceTest))
return
}
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccManagedDiskLinuxDeviceLogin,
})
}
func TestBuilderAcc_ManagedDisk_Linux_AzureCLI(t *testing.T) {
if os.Getenv("AZURE_CLI_AUTH") == "" {
t.Skip("Azure CLI Acceptance tests skipped unless env 'AZURE_CLI_AUTH' is set, and an active `az login` session has been established")
return
}
var b Builder
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAuthPreCheck(t) },
Builder: &b,
Template: testBuilderAccManagedDiskLinuxAzureCLI,
Check: func([]packersdk.Artifact) error {
checkTemporaryGroupDeleted(t, &b)
return nil
},
})
}
func TestBuilderAcc_Blob_Windows(t *testing.T) {
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Builder: &Builder{},
Template: testBuilderAccBlobWindows,
})
}
func TestBuilderAcc_Blob_Linux(t *testing.T) {
var b Builder
builderT.Test(t, builderT.TestCase{
PreCheck: func() { testAuthPreCheck(t) },
Builder: &b,
Template: testBuilderAccBlobLinux,
Check: func([]packersdk.Artifact) error {
checkUnmanagedVHDDeleted(t, &b)
return nil
},
})
}
func testAccPreCheck(*testing.T) {}
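// Editor-added sketch (not part of the original file): testAccPreCheck above is intentionally a
// no-op; a stricter variant could assert the environment variables listed in the header comment.
// The function name below is hypothetical.
func testAccPreCheckStrict(t *testing.T) {
	for _, v := range []string{"ARM_CLIENT_ID", "ARM_CLIENT_SECRET", "ARM_SUBSCRIPTION_ID", "ARM_STORAGE_ACCOUNT"} {
		if os.Getenv(v) == "" {
			t.Fatalf("acceptance tests require %s to be set", v)
		}
	}
}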
func testAuthPreCheck(t *testing.T) {
_, err := auth.NewAuthorizerFromEnvironment()
if err != nil {
t.Fatalf("failed to auth to azure: %s", err)
}
}
func checkTemporaryGroupDeleted(t *testing.T, b *Builder) {
ui := testUi()
spnCloud, spnKeyVault, err := b.getServicePrincipalTokens(ui.Say)
if err != nil {
t.Fatalf("failed getting azure tokens: %s", err)
}
ui.Message("Creating test Azure Resource Manager (ARM) client ...")
azureClient, err := NewAzureClient(
b.config.ClientConfig.SubscriptionID,
b.config.SharedGalleryDestination.SigDestinationSubscription,
b.config.ResourceGroupName,
b.config.StorageAccount,
b.config.ClientConfig.CloudEnvironment(),
b.config.SharedGalleryTimeout,
b.config.PollingDurationTimeout,
spnCloud,
spnKeyVault)
if err != nil {
t.Fatalf("failed to create azure client: %s", err)
}
// Validate resource group has been deleted
_, err = azureClient.GroupsClient.Get(context.Background(), b.config.tmpResourceGroupName)
if err == nil || !resourceNotFound(err) {
t.Fatalf("failed validating resource group deletion: %s", err)
}
}
func checkUnmanagedVHDDeleted(t *testing.T, b *Builder) {
ui := testUi()
spnCloud, spnKeyVault, err := b.getServicePrincipalTokens(ui.Say)
if err != nil {
t.Fatalf("failed getting azure tokens: %s", err)
}
azureClient, err := NewAzureClient(
b.config.ClientConfig.SubscriptionID,
b.config.SharedGalleryDestination.SigDestinationSubscription,
b.config.ResourceGroupName,
b.config.StorageAccount,
b.config.ClientConfig.CloudEnvironment(),
b.config.SharedGalleryTimeout,
b.config.PollingDurationTimeout,
spnCloud,
spnKeyVault)
if err != nil {
t.Fatalf("failed to create azure client: %s", err)
}
// validate temporary os blob was deleted
blob := azureClient.BlobStorageClient.GetContainerReference("images").GetBlobReference(b.config.tmpOSDiskName)
_, err = blob.BreakLease(nil)
if err != nil && !strings.Contains(err.Error(), "BlobNotFound") {
t.Fatalf("failed validating deletion of unmanaged vhd: %s", err)
}
// Validate resource group has been deleted
_, err = azureClient.GroupsClient.Get(context.Background(), b.config.tmpResourceGroupName)
if err == nil || !resourceNotFound(err) {
t.Fatalf("failed validating resource group deletion: %s", err)
}
}
func resourceNotFound(err error) bool {
derr := autorest.DetailedError{}
return errors.As(err, &derr) && derr.StatusCode == 404
}
func testUi() *packersdk.BasicUi {
return &packersdk.BasicUi{
Reader: new(bytes.Buffer),
Writer: new(bytes.Buffer),
ErrorWriter: new(bytes.Buffer),
}
}
const testBuilderAccManagedDiskWindows = `
{
"variables": {
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
"client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
},
"builders": [{
"type": "test",
"client_id": "{{user ` + "`client_id`" + `}}",
"client_secret": "{{user ` + "`client_secret`" + `}}",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskWindows-{{timestamp}}",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2012-R2-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "3m",
"winrm_username": "packer",
"async_resourcegroup_delete": "true",
"location": "South Central US",
"vm_size": "Standard_DS2_v2"
}]
}
`
const testBuilderAccManagedDiskWindowsBuildResourceGroup = `
{
"variables": {
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
"client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
},
"builders": [{
"type": "test",
"client_id": "{{user ` + "`client_id`" + `}}",
"client_secret": "{{user ` + "`client_secret`" + `}}",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"build_resource_group_name" : "packer-acceptance-test",
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskWindowsBuildResourceGroup-{{timestamp}}",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2012-R2-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "3m",
"winrm_username": "packer",
"async_resourcegroup_delete": "true",
"vm_size": "Standard_DS2_v2"
}]
}
`
const testBuilderAccManagedDiskWindowsBuildResourceGroupAdditionalDisk = `
{
"variables": {
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
"client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
},
"builders": [{
"type": "test",
"client_id": "{{user ` + "`client_id`" + `}}",
"client_secret": "{{user ` + "`client_secret`" + `}}",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"build_resource_group_name" : "packer-acceptance-test",
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskWindowsBuildResourceGroupAdditionDisk-{{timestamp}}",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2012-R2-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "3m",
"winrm_username": "packer",
"async_resourcegroup_delete": "true",
"vm_size": "Standard_DS2_v2",
"disk_additional_size": [10,15]
}]
}
`
const testBuilderAccManagedDiskWindowsDeviceLogin = `
{
"variables": {
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
},
"builders": [{
"type": "test",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskWindowsDeviceLogin-{{timestamp}}",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2012-R2-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "3m",
"winrm_username": "packer",
"location": "South Central US",
"vm_size": "Standard_DS2_v2"
}]
}
`
const testBuilderAccManagedDiskLinux = `
{
"variables": {
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
"client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
},
"builders": [{
"type": "test",
"client_id": "{{user ` + "`client_id`" + `}}",
"client_secret": "{{user ` + "`client_secret`" + `}}",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskLinux-{{timestamp}}",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"location": "South Central US",
"vm_size": "Standard_DS2_v2",
"azure_tags": {
"env": "testing",
"builder": "packer"
}
}]
}
`
const testBuilderAccManagedDiskLinuxDeviceLogin = `
{
"variables": {
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}"
},
"builders": [{
"type": "test",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskLinuxDeviceLogin-{{timestamp}}",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"async_resourcegroup_delete": "true",
"location": "South Central US",
"vm_size": "Standard_DS2_v2"
}]
}
`
const testBuilderAccBlobWindows = `
{
"variables": {
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
"client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}",
"storage_account": "{{env ` + "`ARM_STORAGE_ACCOUNT`" + `}}"
},
"builders": [{
"type": "test",
"client_id": "{{user ` + "`client_id`" + `}}",
"client_secret": "{{user ` + "`client_secret`" + `}}",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"storage_account": "{{user ` + "`storage_account`" + `}}",
"resource_group_name": "packer-acceptance-test",
"capture_container_name": "test",
"capture_name_prefix": "testBuilderAccBlobWin",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2012-R2-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "3m",
"winrm_username": "packer",
"location": "South Central US",
"vm_size": "Standard_DS2_v2"
}]
}
`
const testBuilderAccBlobLinux = `
{
"variables": {
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}",
"client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}",
"subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}",
"storage_account": "{{env ` + "`ARM_STORAGE_ACCOUNT`" + `}}"
},
"builders": [{
"type": "test",
"client_id": "{{user ` + "`client_id`" + `}}",
"client_secret": "{{user ` + "`client_secret`" + `}}",
"subscription_id": "{{user ` + "`subscription_id`" + `}}",
"storage_account": "{{user ` + "`storage_account`" + `}}",
"resource_group_name": "packer-acceptance-test",
"capture_container_name": "test",
"capture_name_prefix": "testBuilderAccBlobLinux",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"location": "South Central US",
"vm_size": "Standard_DS2_v2"
}]
}
`
const testBuilderAccManagedDiskLinuxAzureCLI = `
{
"builders": [{
"type": "test",
"use_azure_cli_auth": true,
"managed_image_resource_group_name": "packer-acceptance-test",
"managed_image_name": "testBuilderAccManagedDiskLinuxAzureCLI-{{timestamp}}",
"temp_resource_group_name": "packer-acceptance-test-managed-cli",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"location": "South Central US",
"vm_size": "Standard_DS2_v2",
"azure_tags": {
"env": "testing",
"builder": "packer"
}
}]
}
`


@@ -1,67 +0,0 @@
package arm
import (
"testing"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func TestStateBagShouldBePopulatedExpectedValues(t *testing.T) {
var testSubject Builder
_, _, err := testSubject.Prepare(getArmBuilderConfiguration(), getPackerConfiguration())
if err != nil {
t.Fatalf("failed to prepare: %s", err)
}
var expectedStateBagKeys = []string{
constants.AuthorizedKey,
constants.ArmTags,
constants.ArmComputeName,
constants.ArmDeploymentName,
constants.ArmNicName,
constants.ArmResourceGroupName,
constants.ArmStorageAccountName,
constants.ArmVirtualMachineCaptureParameters,
constants.ArmPublicIPAddressName,
constants.ArmAsyncResourceGroupDelete,
}
for _, v := range expectedStateBagKeys {
if _, ok := testSubject.stateBag.GetOk(v); !ok {
t.Errorf("Expected the builder's state bag to contain '%s', but it did not.", v)
}
}
}
func TestStateBagShouldPopulateExpectedTags(t *testing.T) {
var testSubject Builder
expectedTags := map[string]string{
"env": "test",
"builder": "packer",
}
armConfig := getArmBuilderConfiguration()
armConfig["azure_tags"] = expectedTags
_, _, err := testSubject.Prepare(armConfig, getPackerConfiguration())
if err != nil {
t.Fatalf("failed to prepare: %s", err)
}
tags, ok := testSubject.stateBag.Get(constants.ArmTags).(map[string]*string)
if !ok {
t.Errorf("Expected the builder's state bag to contain tags of type %T, but didn't.", testSubject.config.AzureTags)
}
if len(tags) != len(expectedTags) {
t.Errorf("expect tags from state to be the same length as tags from config")
}
for k, v := range tags {
if expectedTags[k] != *v {
t.Errorf("expect tag value of %s to be %s, but got %s", k, expectedTags[k], *v)
}
}
}


@@ -1,84 +0,0 @@
package arm
type CaptureTemplateParameter struct {
Type string `json:"type"`
DefaultValue string `json:"defaultValue,omitempty"`
}
type CaptureHardwareProfile struct {
VMSize string `json:"vmSize"`
}
type CaptureUri struct {
Uri string `json:"uri"`
}
type CaptureDisk struct {
OSType string `json:"osType"`
Name string `json:"name"`
Image CaptureUri `json:"image"`
Vhd CaptureUri `json:"vhd"`
CreateOption string `json:"createOption"`
Caching string `json:"caching"`
}
type CaptureStorageProfile struct {
OSDisk CaptureDisk `json:"osDisk"`
DataDisks []CaptureDisk `json:"dataDisks"`
}
type CaptureOSProfile struct {
ComputerName string `json:"computerName"`
AdminUsername string `json:"adminUsername"`
AdminPassword string `json:"adminPassword"`
}
type CaptureNetworkInterface struct {
Id string `json:"id"`
}
type CaptureNetworkProfile struct {
NetworkInterfaces []CaptureNetworkInterface `json:"networkInterfaces"`
}
type CaptureBootDiagnostics struct {
Enabled bool `json:"enabled"`
}
type CaptureDiagnosticProfile struct {
BootDiagnostics CaptureBootDiagnostics `json:"bootDiagnostics"`
}
type CaptureProperties struct {
HardwareProfile CaptureHardwareProfile `json:"hardwareProfile"`
StorageProfile CaptureStorageProfile `json:"storageProfile"`
OSProfile CaptureOSProfile `json:"osProfile"`
NetworkProfile CaptureNetworkProfile `json:"networkProfile"`
DiagnosticsProfile CaptureDiagnosticProfile `json:"diagnosticsProfile"`
ProvisioningState int `json:"provisioningState"`
}
type CaptureResources struct {
ApiVersion string `json:"apiVersion"`
Name string `json:"name"`
Type string `json:"type"`
Location string `json:"location"`
Properties CaptureProperties `json:"properties"`
}
type CaptureTemplate struct {
Schema string `json:"$schema"`
ContentVersion string `json:"contentVersion"`
Parameters map[string]CaptureTemplateParameter `json:"parameters"`
Resources []CaptureResources `json:"resources"`
}
type CaptureOperationProperties struct {
Output *CaptureTemplate `json:"output"`
}
type CaptureOperation struct {
OperationId string `json:"operationId"`
Status string `json:"status"`
Properties *CaptureOperationProperties `json:"properties"`
}
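// Editor-added sketch (not part of the original file): a minimal example of decoding the JSON an
// Azure capture operation returns into the types above (the accompanying tests exercise a full
// document). Assumes encoding/json is imported; the function name is hypothetical.
func exampleDecodeCaptureOperation(raw []byte) (*CaptureTemplate, error) {
	var op CaptureOperation
	if err := json.Unmarshal(raw, &op); err != nil {
		return nil, err
	}
	if op.Properties == nil {
		// Some operations (e.g. status-only responses) carry no template output.
		return nil, nil
	}
	return op.Properties.Output, nil
}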


@@ -1,272 +0,0 @@
package arm
import (
"encoding/json"
"testing"
)
var captureTemplate01 = `{
"operationId": "ac1c7c38-a591-41b3-89bd-ea39fceace1b",
"status": "Succeeded",
"startTime": "2016-04-04T21:07:25.2900874+00:00",
"endTime": "2016-04-04T21:07:26.4776321+00:00",
"properties": {
"output": {
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/VM_IP.json",
"contentVersion": "1.0.0.0",
"parameters": {
"vmName": {
"type": "string"
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_A2"
},
"adminUserName": {
"type": "string"
},
"adminPassword": {
"type": "securestring"
},
"networkInterfaceId": {
"type": "string"
}
},
"resources": [
{
"apiVersion": "2015-06-15",
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"storageProfile": {
"osDisk": {
"osType": "Linux",
"name": "packer-osDisk.32118633-6dc9-449f-83b6-a7d2983bec14.vhd",
"createOption": "FromImage",
"image": {
"uri": "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.32118633-6dc9-449f-83b6-a7d2983bec14.vhd"
},
"vhd": {
"uri": "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/osDisk.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd"
},
"caching": "ReadWrite"
},
"dataDisks": [
{
"lun": 0,
"name": "packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd",
"createOption": "Empty",
"image": {
"uri": "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd"
},
"vhd": {
"uri": "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-0.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd"
},
"caching": "ReadWrite"
},
{
"lun": 1,
"name": "packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd",
"createOption": "Empty",
"image": {
"uri": "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd"
},
"vhd": {
"uri": "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-1.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd"
},
"caching": "ReadWrite"
}
]
},
"osProfile": {
"computerName": "[parameters('vmName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[parameters('networkInterfaceId')]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": false
}
},
"provisioningState": 0
},
"name": "[parameters('vmName')]",
"type": "Microsoft.Compute/virtualMachines",
"location": "southcentralus"
}
]
}
}
}`
var captureTemplate02 = `{
"operationId": "ac1c7c38-a591-41b3-89bd-ea39fceace1b",
"status": "Succeeded",
"startTime": "2016-04-04T21:07:25.2900874+00:00",
"endTime": "2016-04-04T21:07:26.4776321+00:00"
}`
func TestCaptureParseJson(t *testing.T) {
var operation CaptureOperation
err := json.Unmarshal([]byte(captureTemplate01), &operation)
if err != nil {
t.Fatalf("failed to the sample capture operation: %s", err)
}
testSubject := operation.Properties.Output
if testSubject.Schema != "http://schema.management.azure.com/schemas/2014-04-01-preview/VM_IP.json" {
t.Errorf("Schema's value was unexpected: %s", testSubject.Schema)
}
if testSubject.ContentVersion != "1.0.0.0" {
t.Errorf("ContentVersion's value was unexpected: %s", testSubject.ContentVersion)
}
// == Parameters ====================================
if len(testSubject.Parameters) != 5 {
t.Fatalf("expected parameters to have 5 keys, but got %d", len(testSubject.Parameters))
}
if _, ok := testSubject.Parameters["vmName"]; !ok {
t.Errorf("Parameters['vmName'] was an expected parameters, but it did not exist")
}
if testSubject.Parameters["vmName"].Type != "string" {
t.Errorf("Parameters['vmName'].Type == 'string', but got '%s'", testSubject.Parameters["vmName"].Type)
}
if _, ok := testSubject.Parameters["vmSize"]; !ok {
t.Errorf("Parameters['vmSize'] was an expected parameters, but it did not exist")
}
if testSubject.Parameters["vmSize"].Type != "string" {
t.Errorf("Parameters['vmSize'].Type == 'string', but got '%s'", testSubject.Parameters["vmSize"])
}
if testSubject.Parameters["vmSize"].DefaultValue != "Standard_A2" {
t.Errorf("Parameters['vmSize'].DefaultValue == 'string', but got '%s'", testSubject.Parameters["vmSize"].DefaultValue)
}
// == Resources =====================================
if len(testSubject.Resources) != 1 {
t.Fatalf("expected resources to have length 1, but got %d", len(testSubject.Resources))
}
if testSubject.Resources[0].Name != "[parameters('vmName')]" {
t.Errorf("Resources[0].Name's value was unexpected: %s", testSubject.Resources[0].Name)
}
if testSubject.Resources[0].Type != "Microsoft.Compute/virtualMachines" {
t.Errorf("Resources[0].Type's value was unexpected: %s", testSubject.Resources[0].Type)
}
if testSubject.Resources[0].Location != "southcentralus" {
t.Errorf("Resources[0].Location's value was unexpected: %s", testSubject.Resources[0].Location)
}
// == Resources/Properties =====================================
if testSubject.Resources[0].Properties.ProvisioningState != 0 {
t.Errorf("Resources[0].Properties.ProvisioningState's value was unexpected: %d", testSubject.Resources[0].Properties.ProvisioningState)
}
// == Resources/Properties/HardwareProfile ======================
hardwareProfile := testSubject.Resources[0].Properties.HardwareProfile
if hardwareProfile.VMSize != "[parameters('vmSize')]" {
t.Errorf("Resources[0].Properties.HardwareProfile.VMSize's value was unexpected: %s", hardwareProfile.VMSize)
}
// == Resources/Properties/StorageProfile/OSDisk ================
osDisk := testSubject.Resources[0].Properties.StorageProfile.OSDisk
if osDisk.OSType != "Linux" {
t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.OSDisk's value was unexpected: %s", osDisk.OSType)
}
if osDisk.Name != "packer-osDisk.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.Name's value was unexpected: %s", osDisk.Name)
}
if osDisk.CreateOption != "FromImage" {
t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.CreateOption's value was unexpected: %s", osDisk.CreateOption)
}
if osDisk.Image.Uri != "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.Image.Uri's value was unexpected: %s", osDisk.Image.Uri)
}
if osDisk.Vhd.Uri != "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/osDisk.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.Vhd.Uri's value was unexpected: %s", osDisk.Vhd.Uri)
}
if osDisk.Caching != "ReadWrite" {
t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.Caching's value was unexpected: %s", osDisk.Caching)
}
// == Resources/Properties/StorageProfile/DataDisks ================
dataDisks := testSubject.Resources[0].Properties.StorageProfile.DataDisks
if len(dataDisks) != 2 {
t.Errorf("Resources[0].Properties.StorageProfile.DataDisks, 2 disks expected but was: %d", len(dataDisks))
}
if dataDisks[0].Name != "packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Name's value was unexpected: %s", dataDisks[0].Name)
}
if dataDisks[0].CreateOption != "Empty" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].CreateOption's value was unexpected: %s", dataDisks[0].CreateOption)
}
if dataDisks[0].Image.Uri != "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Image.Uri's value was unexpected: %s", dataDisks[0].Image.Uri)
}
if dataDisks[0].Vhd.Uri != "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-0.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Vhd.Uri's value was unexpected: %s", dataDisks[0].Vhd.Uri)
}
if dataDisks[0].Caching != "ReadWrite" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Caching's value was unexpected: %s", dataDisks[0].Caching)
}
if dataDisks[1].Name != "packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Name's value was unexpected: %s", dataDisks[1].Name)
}
if dataDisks[1].CreateOption != "Empty" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].CreateOption's value was unexpected: %s", dataDisks[1].CreateOption)
}
if dataDisks[1].Image.Uri != "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Image.Uri's value was unexpected: %s", dataDisks[1].Image.Uri)
}
if dataDisks[1].Vhd.Uri != "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-1.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Vhd.Uri's value was unexpected: %s", dataDisks[1].Vhd.Uri)
}
if dataDisks[1].Caching != "ReadWrite" {
t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Caching's value was unexpected: %s", dataDisks[1].Caching)
}
// == Resources/Properties/OSProfile ============================
osProfile := testSubject.Resources[0].Properties.OSProfile
if osProfile.AdminPassword != "[parameters('adminPassword')]" {
t.Errorf("Resources[0].Properties.OSProfile.AdminPassword's value was unexpected: %s", osProfile.AdminPassword)
}
if osProfile.AdminUsername != "[parameters('adminUsername')]" {
t.Errorf("Resources[0].Properties.OSProfile.AdminUsername's value was unexpected: %s", osProfile.AdminUsername)
}
if osProfile.ComputerName != "[parameters('vmName')]" {
t.Errorf("Resources[0].Properties.OSProfile.ComputerName's value was unexpected: %s", osProfile.ComputerName)
}
// == Resources/Properties/NetworkProfile =======================
networkProfile := testSubject.Resources[0].Properties.NetworkProfile
if len(networkProfile.NetworkInterfaces) != 1 {
t.Errorf("Count of Resources[0].Properties.NetworkProfile.NetworkInterfaces was expected to be 1, but go %d", len(networkProfile.NetworkInterfaces))
}
if networkProfile.NetworkInterfaces[0].Id != "[parameters('networkInterfaceId')]" {
t.Errorf("Resources[0].Properties.NetworkProfile.NetworkInterfaces[0].Id's value was unexpected: %s", networkProfile.NetworkInterfaces[0].Id)
}
// == Resources/Properties/DiagnosticsProfile ===================
diagnosticsProfile := testSubject.Resources[0].Properties.DiagnosticsProfile
if diagnosticsProfile.BootDiagnostics.Enabled != false {
t.Errorf("Resources[0].Properties.DiagnosticsProfile.BootDiagnostics.Enabled's value was unexpected: %t", diagnosticsProfile.BootDiagnostics.Enabled)
}
}
func TestCaptureEmptyOperationJson(t *testing.T) {
var operation CaptureOperation
err := json.Unmarshal([]byte(captureTemplate02), &operation)
if err != nil {
t.Fatalf("failed to the sample capture operation: %s", err)
}
if operation.Properties != nil {
t.Errorf("JSON contained no properties, but value was not nil: %+v", operation.Properties)
}
}

File diff suppressed because it is too large


@@ -1,353 +0,0 @@
// Code generated by "mapstructure-to-hcl2 -type Config,SharedImageGallery,SharedImageGalleryDestination,PlanInformation"; DO NOT EDIT.
package arm
import (
"github.com/hashicorp/hcl/v2/hcldec"
"github.com/hashicorp/packer-plugin-sdk/template/config"
"github.com/zclconf/go-cty/cty"
)
// FlatConfig is an auto-generated flat version of Config.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
type FlatConfig struct {
PackerBuildName *string `mapstructure:"packer_build_name" cty:"packer_build_name" hcl:"packer_build_name"`
PackerBuilderType *string `mapstructure:"packer_builder_type" cty:"packer_builder_type" hcl:"packer_builder_type"`
PackerCoreVersion *string `mapstructure:"packer_core_version" cty:"packer_core_version" hcl:"packer_core_version"`
PackerDebug *bool `mapstructure:"packer_debug" cty:"packer_debug" hcl:"packer_debug"`
PackerForce *bool `mapstructure:"packer_force" cty:"packer_force" hcl:"packer_force"`
PackerOnError *string `mapstructure:"packer_on_error" cty:"packer_on_error" hcl:"packer_on_error"`
PackerUserVars map[string]string `mapstructure:"packer_user_variables" cty:"packer_user_variables" hcl:"packer_user_variables"`
PackerSensitiveVars []string `mapstructure:"packer_sensitive_variables" cty:"packer_sensitive_variables" hcl:"packer_sensitive_variables"`
CloudEnvironmentName *string `mapstructure:"cloud_environment_name" required:"false" cty:"cloud_environment_name" hcl:"cloud_environment_name"`
ClientID *string `mapstructure:"client_id" cty:"client_id" hcl:"client_id"`
ClientSecret *string `mapstructure:"client_secret" cty:"client_secret" hcl:"client_secret"`
ClientCertPath *string `mapstructure:"client_cert_path" cty:"client_cert_path" hcl:"client_cert_path"`
ClientCertExpireTimeout *string `mapstructure:"client_cert_token_timeout" required:"false" cty:"client_cert_token_timeout" hcl:"client_cert_token_timeout"`
ClientJWT *string `mapstructure:"client_jwt" cty:"client_jwt" hcl:"client_jwt"`
ObjectID *string `mapstructure:"object_id" cty:"object_id" hcl:"object_id"`
TenantID *string `mapstructure:"tenant_id" required:"false" cty:"tenant_id" hcl:"tenant_id"`
SubscriptionID *string `mapstructure:"subscription_id" cty:"subscription_id" hcl:"subscription_id"`
UseAzureCLIAuth *bool `mapstructure:"use_azure_cli_auth" required:"false" cty:"use_azure_cli_auth" hcl:"use_azure_cli_auth"`
UserAssignedManagedIdentities []string `mapstructure:"user_assigned_managed_identities" required:"false" cty:"user_assigned_managed_identities" hcl:"user_assigned_managed_identities"`
CaptureNamePrefix *string `mapstructure:"capture_name_prefix" cty:"capture_name_prefix" hcl:"capture_name_prefix"`
CaptureContainerName *string `mapstructure:"capture_container_name" cty:"capture_container_name" hcl:"capture_container_name"`
SharedGallery *FlatSharedImageGallery `mapstructure:"shared_image_gallery" required:"false" cty:"shared_image_gallery" hcl:"shared_image_gallery"`
SharedGalleryDestination *FlatSharedImageGalleryDestination `mapstructure:"shared_image_gallery_destination" cty:"shared_image_gallery_destination" hcl:"shared_image_gallery_destination"`
SharedGalleryTimeout *string `mapstructure:"shared_image_gallery_timeout" cty:"shared_image_gallery_timeout" hcl:"shared_image_gallery_timeout"`
SharedGalleryImageVersionEndOfLifeDate *string `mapstructure:"shared_gallery_image_version_end_of_life_date" required:"false" cty:"shared_gallery_image_version_end_of_life_date" hcl:"shared_gallery_image_version_end_of_life_date"`
SharedGalleryImageVersionReplicaCount *int32 `mapstructure:"shared_image_gallery_replica_count" required:"false" cty:"shared_image_gallery_replica_count" hcl:"shared_image_gallery_replica_count"`
SharedGalleryImageVersionExcludeFromLatest *bool `mapstructure:"shared_gallery_image_version_exclude_from_latest" required:"false" cty:"shared_gallery_image_version_exclude_from_latest" hcl:"shared_gallery_image_version_exclude_from_latest"`
ImagePublisher *string `mapstructure:"image_publisher" required:"true" cty:"image_publisher" hcl:"image_publisher"`
ImageOffer *string `mapstructure:"image_offer" required:"true" cty:"image_offer" hcl:"image_offer"`
ImageSku *string `mapstructure:"image_sku" required:"true" cty:"image_sku" hcl:"image_sku"`
ImageVersion *string `mapstructure:"image_version" required:"false" cty:"image_version" hcl:"image_version"`
ImageUrl *string `mapstructure:"image_url" required:"true" cty:"image_url" hcl:"image_url"`
CustomManagedImageName *string `mapstructure:"custom_managed_image_name" required:"true" cty:"custom_managed_image_name" hcl:"custom_managed_image_name"`
CustomManagedImageResourceGroupName *string `mapstructure:"custom_managed_image_resource_group_name" required:"true" cty:"custom_managed_image_resource_group_name" hcl:"custom_managed_image_resource_group_name"`
Location *string `mapstructure:"location" cty:"location" hcl:"location"`
VMSize *string `mapstructure:"vm_size" required:"false" cty:"vm_size" hcl:"vm_size"`
ManagedImageResourceGroupName *string `mapstructure:"managed_image_resource_group_name" cty:"managed_image_resource_group_name" hcl:"managed_image_resource_group_name"`
ManagedImageName *string `mapstructure:"managed_image_name" cty:"managed_image_name" hcl:"managed_image_name"`
ManagedImageStorageAccountType *string `mapstructure:"managed_image_storage_account_type" required:"false" cty:"managed_image_storage_account_type" hcl:"managed_image_storage_account_type"`
ManagedImageOSDiskSnapshotName *string `mapstructure:"managed_image_os_disk_snapshot_name" required:"false" cty:"managed_image_os_disk_snapshot_name" hcl:"managed_image_os_disk_snapshot_name"`
ManagedImageDataDiskSnapshotPrefix *string `mapstructure:"managed_image_data_disk_snapshot_prefix" required:"false" cty:"managed_image_data_disk_snapshot_prefix" hcl:"managed_image_data_disk_snapshot_prefix"`
ManagedImageZoneResilient *bool `mapstructure:"managed_image_zone_resilient" required:"false" cty:"managed_image_zone_resilient" hcl:"managed_image_zone_resilient"`
AzureTags map[string]string `mapstructure:"azure_tags" required:"false" cty:"azure_tags" hcl:"azure_tags"`
AzureTag []config.FlatNameValue `mapstructure:"azure_tag" required:"false" cty:"azure_tag" hcl:"azure_tag"`
ResourceGroupName *string `mapstructure:"resource_group_name" cty:"resource_group_name" hcl:"resource_group_name"`
StorageAccount *string `mapstructure:"storage_account" cty:"storage_account" hcl:"storage_account"`
TempComputeName *string `mapstructure:"temp_compute_name" required:"false" cty:"temp_compute_name" hcl:"temp_compute_name"`
TempResourceGroupName *string `mapstructure:"temp_resource_group_name" cty:"temp_resource_group_name" hcl:"temp_resource_group_name"`
BuildResourceGroupName *string `mapstructure:"build_resource_group_name" cty:"build_resource_group_name" hcl:"build_resource_group_name"`
BuildKeyVaultName *string `mapstructure:"build_key_vault_name" cty:"build_key_vault_name" hcl:"build_key_vault_name"`
BuildKeyVaultSKU *string `mapstructure:"build_key_vault_sku" cty:"build_key_vault_sku" hcl:"build_key_vault_sku"`
PrivateVirtualNetworkWithPublicIp *bool `mapstructure:"private_virtual_network_with_public_ip" required:"false" cty:"private_virtual_network_with_public_ip" hcl:"private_virtual_network_with_public_ip"`
VirtualNetworkName *string `mapstructure:"virtual_network_name" required:"false" cty:"virtual_network_name" hcl:"virtual_network_name"`
VirtualNetworkSubnetName *string `mapstructure:"virtual_network_subnet_name" required:"false" cty:"virtual_network_subnet_name" hcl:"virtual_network_subnet_name"`
VirtualNetworkResourceGroupName *string `mapstructure:"virtual_network_resource_group_name" required:"false" cty:"virtual_network_resource_group_name" hcl:"virtual_network_resource_group_name"`
CustomDataFile *string `mapstructure:"custom_data_file" required:"false" cty:"custom_data_file" hcl:"custom_data_file"`
PlanInfo *FlatPlanInformation `mapstructure:"plan_info" required:"false" cty:"plan_info" hcl:"plan_info"`
PollingDurationTimeout *string `mapstructure:"polling_duration_timeout" required:"false" cty:"polling_duration_timeout" hcl:"polling_duration_timeout"`
OSType *string `mapstructure:"os_type" required:"false" cty:"os_type" hcl:"os_type"`
OSDiskSizeGB *int32 `mapstructure:"os_disk_size_gb" required:"false" cty:"os_disk_size_gb" hcl:"os_disk_size_gb"`
AdditionalDiskSize []int32 `mapstructure:"disk_additional_size" required:"false" cty:"disk_additional_size" hcl:"disk_additional_size"`
DiskCachingType *string `mapstructure:"disk_caching_type" required:"false" cty:"disk_caching_type" hcl:"disk_caching_type"`
AllowedInboundIpAddresses []string `mapstructure:"allowed_inbound_ip_addresses" cty:"allowed_inbound_ip_addresses" hcl:"allowed_inbound_ip_addresses"`
BootDiagSTGAccount *string `mapstructure:"boot_diag_storage_account" required:"false" cty:"boot_diag_storage_account" hcl:"boot_diag_storage_account"`
CustomResourcePrefix *string `mapstructure:"custom_resource_build_prefix" required:"false" cty:"custom_resource_build_prefix" hcl:"custom_resource_build_prefix"`
Type *string `mapstructure:"communicator" cty:"communicator" hcl:"communicator"`
PauseBeforeConnect *string `mapstructure:"pause_before_connecting" cty:"pause_before_connecting" hcl:"pause_before_connecting"`
SSHHost *string `mapstructure:"ssh_host" cty:"ssh_host" hcl:"ssh_host"`
SSHPort *int `mapstructure:"ssh_port" cty:"ssh_port" hcl:"ssh_port"`
SSHUsername *string `mapstructure:"ssh_username" cty:"ssh_username" hcl:"ssh_username"`
SSHPassword *string `mapstructure:"ssh_password" cty:"ssh_password" hcl:"ssh_password"`
SSHKeyPairName *string `mapstructure:"ssh_keypair_name" undocumented:"true" cty:"ssh_keypair_name" hcl:"ssh_keypair_name"`
SSHTemporaryKeyPairName *string `mapstructure:"temporary_key_pair_name" undocumented:"true" cty:"temporary_key_pair_name" hcl:"temporary_key_pair_name"`
SSHTemporaryKeyPairType *string `mapstructure:"temporary_key_pair_type" cty:"temporary_key_pair_type" hcl:"temporary_key_pair_type"`
SSHTemporaryKeyPairBits *int `mapstructure:"temporary_key_pair_bits" cty:"temporary_key_pair_bits" hcl:"temporary_key_pair_bits"`
SSHCiphers []string `mapstructure:"ssh_ciphers" cty:"ssh_ciphers" hcl:"ssh_ciphers"`
SSHClearAuthorizedKeys *bool `mapstructure:"ssh_clear_authorized_keys" cty:"ssh_clear_authorized_keys" hcl:"ssh_clear_authorized_keys"`
SSHKEXAlgos []string `mapstructure:"ssh_key_exchange_algorithms" cty:"ssh_key_exchange_algorithms" hcl:"ssh_key_exchange_algorithms"`
SSHPrivateKeyFile *string `mapstructure:"ssh_private_key_file" undocumented:"true" cty:"ssh_private_key_file" hcl:"ssh_private_key_file"`
SSHCertificateFile *string `mapstructure:"ssh_certificate_file" cty:"ssh_certificate_file" hcl:"ssh_certificate_file"`
SSHPty *bool `mapstructure:"ssh_pty" cty:"ssh_pty" hcl:"ssh_pty"`
SSHTimeout *string `mapstructure:"ssh_timeout" cty:"ssh_timeout" hcl:"ssh_timeout"`
SSHWaitTimeout *string `mapstructure:"ssh_wait_timeout" undocumented:"true" cty:"ssh_wait_timeout" hcl:"ssh_wait_timeout"`
SSHAgentAuth *bool `mapstructure:"ssh_agent_auth" undocumented:"true" cty:"ssh_agent_auth" hcl:"ssh_agent_auth"`
SSHDisableAgentForwarding *bool `mapstructure:"ssh_disable_agent_forwarding" cty:"ssh_disable_agent_forwarding" hcl:"ssh_disable_agent_forwarding"`
SSHHandshakeAttempts *int `mapstructure:"ssh_handshake_attempts" cty:"ssh_handshake_attempts" hcl:"ssh_handshake_attempts"`
SSHBastionHost *string `mapstructure:"ssh_bastion_host" cty:"ssh_bastion_host" hcl:"ssh_bastion_host"`
SSHBastionPort *int `mapstructure:"ssh_bastion_port" cty:"ssh_bastion_port" hcl:"ssh_bastion_port"`
SSHBastionAgentAuth *bool `mapstructure:"ssh_bastion_agent_auth" cty:"ssh_bastion_agent_auth" hcl:"ssh_bastion_agent_auth"`
SSHBastionUsername *string `mapstructure:"ssh_bastion_username" cty:"ssh_bastion_username" hcl:"ssh_bastion_username"`
SSHBastionPassword *string `mapstructure:"ssh_bastion_password" cty:"ssh_bastion_password" hcl:"ssh_bastion_password"`
SSHBastionInteractive *bool `mapstructure:"ssh_bastion_interactive" cty:"ssh_bastion_interactive" hcl:"ssh_bastion_interactive"`
SSHBastionPrivateKeyFile *string `mapstructure:"ssh_bastion_private_key_file" cty:"ssh_bastion_private_key_file" hcl:"ssh_bastion_private_key_file"`
SSHBastionCertificateFile *string `mapstructure:"ssh_bastion_certificate_file" cty:"ssh_bastion_certificate_file" hcl:"ssh_bastion_certificate_file"`
SSHFileTransferMethod *string `mapstructure:"ssh_file_transfer_method" cty:"ssh_file_transfer_method" hcl:"ssh_file_transfer_method"`
SSHProxyHost *string `mapstructure:"ssh_proxy_host" cty:"ssh_proxy_host" hcl:"ssh_proxy_host"`
SSHProxyPort *int `mapstructure:"ssh_proxy_port" cty:"ssh_proxy_port" hcl:"ssh_proxy_port"`
SSHProxyUsername *string `mapstructure:"ssh_proxy_username" cty:"ssh_proxy_username" hcl:"ssh_proxy_username"`
SSHProxyPassword *string `mapstructure:"ssh_proxy_password" cty:"ssh_proxy_password" hcl:"ssh_proxy_password"`
SSHKeepAliveInterval *string `mapstructure:"ssh_keep_alive_interval" cty:"ssh_keep_alive_interval" hcl:"ssh_keep_alive_interval"`
SSHReadWriteTimeout *string `mapstructure:"ssh_read_write_timeout" cty:"ssh_read_write_timeout" hcl:"ssh_read_write_timeout"`
SSHRemoteTunnels []string `mapstructure:"ssh_remote_tunnels" cty:"ssh_remote_tunnels" hcl:"ssh_remote_tunnels"`
SSHLocalTunnels []string `mapstructure:"ssh_local_tunnels" cty:"ssh_local_tunnels" hcl:"ssh_local_tunnels"`
SSHPublicKey []byte `mapstructure:"ssh_public_key" undocumented:"true" cty:"ssh_public_key" hcl:"ssh_public_key"`
SSHPrivateKey []byte `mapstructure:"ssh_private_key" undocumented:"true" cty:"ssh_private_key" hcl:"ssh_private_key"`
WinRMUser *string `mapstructure:"winrm_username" cty:"winrm_username" hcl:"winrm_username"`
WinRMPassword *string `mapstructure:"winrm_password" cty:"winrm_password" hcl:"winrm_password"`
WinRMHost *string `mapstructure:"winrm_host" cty:"winrm_host" hcl:"winrm_host"`
WinRMNoProxy *bool `mapstructure:"winrm_no_proxy" cty:"winrm_no_proxy" hcl:"winrm_no_proxy"`
WinRMPort *int `mapstructure:"winrm_port" cty:"winrm_port" hcl:"winrm_port"`
WinRMTimeout *string `mapstructure:"winrm_timeout" cty:"winrm_timeout" hcl:"winrm_timeout"`
WinRMUseSSL *bool `mapstructure:"winrm_use_ssl" cty:"winrm_use_ssl" hcl:"winrm_use_ssl"`
WinRMInsecure *bool `mapstructure:"winrm_insecure" cty:"winrm_insecure" hcl:"winrm_insecure"`
WinRMUseNTLM *bool `mapstructure:"winrm_use_ntlm" cty:"winrm_use_ntlm" hcl:"winrm_use_ntlm"`
AsyncResourceGroupDelete *bool `mapstructure:"async_resourcegroup_delete" required:"false" cty:"async_resourcegroup_delete" hcl:"async_resourcegroup_delete"`
}
// FlatMapstructure returns a new FlatConfig.
// FlatConfig is an auto-generated flat version of Config.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
func (*Config) FlatMapstructure() interface{ HCL2Spec() map[string]hcldec.Spec } {
return new(FlatConfig)
}
// HCL2Spec returns the hcl spec of a Config.
// This spec is used by HCL to read the fields of Config.
// The decoded values from this spec will then be applied to a FlatConfig.
func (*FlatConfig) HCL2Spec() map[string]hcldec.Spec {
s := map[string]hcldec.Spec{
"packer_build_name": &hcldec.AttrSpec{Name: "packer_build_name", Type: cty.String, Required: false},
"packer_builder_type": &hcldec.AttrSpec{Name: "packer_builder_type", Type: cty.String, Required: false},
"packer_core_version": &hcldec.AttrSpec{Name: "packer_core_version", Type: cty.String, Required: false},
"packer_debug": &hcldec.AttrSpec{Name: "packer_debug", Type: cty.Bool, Required: false},
"packer_force": &hcldec.AttrSpec{Name: "packer_force", Type: cty.Bool, Required: false},
"packer_on_error": &hcldec.AttrSpec{Name: "packer_on_error", Type: cty.String, Required: false},
"packer_user_variables": &hcldec.AttrSpec{Name: "packer_user_variables", Type: cty.Map(cty.String), Required: false},
"packer_sensitive_variables": &hcldec.AttrSpec{Name: "packer_sensitive_variables", Type: cty.List(cty.String), Required: false},
"cloud_environment_name": &hcldec.AttrSpec{Name: "cloud_environment_name", Type: cty.String, Required: false},
"client_id": &hcldec.AttrSpec{Name: "client_id", Type: cty.String, Required: false},
"client_secret": &hcldec.AttrSpec{Name: "client_secret", Type: cty.String, Required: false},
"client_cert_path": &hcldec.AttrSpec{Name: "client_cert_path", Type: cty.String, Required: false},
"client_cert_token_timeout": &hcldec.AttrSpec{Name: "client_cert_token_timeout", Type: cty.String, Required: false},
"client_jwt": &hcldec.AttrSpec{Name: "client_jwt", Type: cty.String, Required: false},
"object_id": &hcldec.AttrSpec{Name: "object_id", Type: cty.String, Required: false},
"tenant_id": &hcldec.AttrSpec{Name: "tenant_id", Type: cty.String, Required: false},
"subscription_id": &hcldec.AttrSpec{Name: "subscription_id", Type: cty.String, Required: false},
"use_azure_cli_auth": &hcldec.AttrSpec{Name: "use_azure_cli_auth", Type: cty.Bool, Required: false},
"user_assigned_managed_identities": &hcldec.AttrSpec{Name: "user_assigned_managed_identities", Type: cty.List(cty.String), Required: false},
"capture_name_prefix": &hcldec.AttrSpec{Name: "capture_name_prefix", Type: cty.String, Required: false},
"capture_container_name": &hcldec.AttrSpec{Name: "capture_container_name", Type: cty.String, Required: false},
"shared_image_gallery": &hcldec.BlockSpec{TypeName: "shared_image_gallery", Nested: hcldec.ObjectSpec((*FlatSharedImageGallery)(nil).HCL2Spec())},
"shared_image_gallery_destination": &hcldec.BlockSpec{TypeName: "shared_image_gallery_destination", Nested: hcldec.ObjectSpec((*FlatSharedImageGalleryDestination)(nil).HCL2Spec())},
"shared_image_gallery_timeout": &hcldec.AttrSpec{Name: "shared_image_gallery_timeout", Type: cty.String, Required: false},
"shared_gallery_image_version_end_of_life_date": &hcldec.AttrSpec{Name: "shared_gallery_image_version_end_of_life_date", Type: cty.String, Required: false},
"shared_image_gallery_replica_count": &hcldec.AttrSpec{Name: "shared_image_gallery_replica_count", Type: cty.Number, Required: false},
"shared_gallery_image_version_exclude_from_latest": &hcldec.AttrSpec{Name: "shared_gallery_image_version_exclude_from_latest", Type: cty.Bool, Required: false},
"image_publisher": &hcldec.AttrSpec{Name: "image_publisher", Type: cty.String, Required: false},
"image_offer": &hcldec.AttrSpec{Name: "image_offer", Type: cty.String, Required: false},
"image_sku": &hcldec.AttrSpec{Name: "image_sku", Type: cty.String, Required: false},
"image_version": &hcldec.AttrSpec{Name: "image_version", Type: cty.String, Required: false},
"image_url": &hcldec.AttrSpec{Name: "image_url", Type: cty.String, Required: false},
"custom_managed_image_name": &hcldec.AttrSpec{Name: "custom_managed_image_name", Type: cty.String, Required: false},
"custom_managed_image_resource_group_name": &hcldec.AttrSpec{Name: "custom_managed_image_resource_group_name", Type: cty.String, Required: false},
"location": &hcldec.AttrSpec{Name: "location", Type: cty.String, Required: false},
"vm_size": &hcldec.AttrSpec{Name: "vm_size", Type: cty.String, Required: false},
"managed_image_resource_group_name": &hcldec.AttrSpec{Name: "managed_image_resource_group_name", Type: cty.String, Required: false},
"managed_image_name": &hcldec.AttrSpec{Name: "managed_image_name", Type: cty.String, Required: false},
"managed_image_storage_account_type": &hcldec.AttrSpec{Name: "managed_image_storage_account_type", Type: cty.String, Required: false},
"managed_image_os_disk_snapshot_name": &hcldec.AttrSpec{Name: "managed_image_os_disk_snapshot_name", Type: cty.String, Required: false},
"managed_image_data_disk_snapshot_prefix": &hcldec.AttrSpec{Name: "managed_image_data_disk_snapshot_prefix", Type: cty.String, Required: false},
"managed_image_zone_resilient": &hcldec.AttrSpec{Name: "managed_image_zone_resilient", Type: cty.Bool, Required: false},
"azure_tags": &hcldec.AttrSpec{Name: "azure_tags", Type: cty.Map(cty.String), Required: false},
"azure_tag": &hcldec.BlockListSpec{TypeName: "azure_tag", Nested: hcldec.ObjectSpec((*config.FlatNameValue)(nil).HCL2Spec())},
"resource_group_name": &hcldec.AttrSpec{Name: "resource_group_name", Type: cty.String, Required: false},
"storage_account": &hcldec.AttrSpec{Name: "storage_account", Type: cty.String, Required: false},
"temp_compute_name": &hcldec.AttrSpec{Name: "temp_compute_name", Type: cty.String, Required: false},
"temp_resource_group_name": &hcldec.AttrSpec{Name: "temp_resource_group_name", Type: cty.String, Required: false},
"build_resource_group_name": &hcldec.AttrSpec{Name: "build_resource_group_name", Type: cty.String, Required: false},
"build_key_vault_name": &hcldec.AttrSpec{Name: "build_key_vault_name", Type: cty.String, Required: false},
"build_key_vault_sku": &hcldec.AttrSpec{Name: "build_key_vault_sku", Type: cty.String, Required: false},
"private_virtual_network_with_public_ip": &hcldec.AttrSpec{Name: "private_virtual_network_with_public_ip", Type: cty.Bool, Required: false},
"virtual_network_name": &hcldec.AttrSpec{Name: "virtual_network_name", Type: cty.String, Required: false},
"virtual_network_subnet_name": &hcldec.AttrSpec{Name: "virtual_network_subnet_name", Type: cty.String, Required: false},
"virtual_network_resource_group_name": &hcldec.AttrSpec{Name: "virtual_network_resource_group_name", Type: cty.String, Required: false},
"custom_data_file": &hcldec.AttrSpec{Name: "custom_data_file", Type: cty.String, Required: false},
"plan_info": &hcldec.BlockSpec{TypeName: "plan_info", Nested: hcldec.ObjectSpec((*FlatPlanInformation)(nil).HCL2Spec())},
"polling_duration_timeout": &hcldec.AttrSpec{Name: "polling_duration_timeout", Type: cty.String, Required: false},
"os_type": &hcldec.AttrSpec{Name: "os_type", Type: cty.String, Required: false},
"os_disk_size_gb": &hcldec.AttrSpec{Name: "os_disk_size_gb", Type: cty.Number, Required: false},
"disk_additional_size": &hcldec.AttrSpec{Name: "disk_additional_size", Type: cty.List(cty.Number), Required: false},
"disk_caching_type": &hcldec.AttrSpec{Name: "disk_caching_type", Type: cty.String, Required: false},
"allowed_inbound_ip_addresses": &hcldec.AttrSpec{Name: "allowed_inbound_ip_addresses", Type: cty.List(cty.String), Required: false},
"boot_diag_storage_account": &hcldec.AttrSpec{Name: "boot_diag_storage_account", Type: cty.String, Required: false},
"custom_resource_build_prefix": &hcldec.AttrSpec{Name: "custom_resource_build_prefix", Type: cty.String, Required: false},
"communicator": &hcldec.AttrSpec{Name: "communicator", Type: cty.String, Required: false},
"pause_before_connecting": &hcldec.AttrSpec{Name: "pause_before_connecting", Type: cty.String, Required: false},
"ssh_host": &hcldec.AttrSpec{Name: "ssh_host", Type: cty.String, Required: false},
"ssh_port": &hcldec.AttrSpec{Name: "ssh_port", Type: cty.Number, Required: false},
"ssh_username": &hcldec.AttrSpec{Name: "ssh_username", Type: cty.String, Required: false},
"ssh_password": &hcldec.AttrSpec{Name: "ssh_password", Type: cty.String, Required: false},
"ssh_keypair_name": &hcldec.AttrSpec{Name: "ssh_keypair_name", Type: cty.String, Required: false},
"temporary_key_pair_name": &hcldec.AttrSpec{Name: "temporary_key_pair_name", Type: cty.String, Required: false},
"temporary_key_pair_type": &hcldec.AttrSpec{Name: "temporary_key_pair_type", Type: cty.String, Required: false},
"temporary_key_pair_bits": &hcldec.AttrSpec{Name: "temporary_key_pair_bits", Type: cty.Number, Required: false},
"ssh_ciphers": &hcldec.AttrSpec{Name: "ssh_ciphers", Type: cty.List(cty.String), Required: false},
"ssh_clear_authorized_keys": &hcldec.AttrSpec{Name: "ssh_clear_authorized_keys", Type: cty.Bool, Required: false},
"ssh_key_exchange_algorithms": &hcldec.AttrSpec{Name: "ssh_key_exchange_algorithms", Type: cty.List(cty.String), Required: false},
"ssh_private_key_file": &hcldec.AttrSpec{Name: "ssh_private_key_file", Type: cty.String, Required: false},
"ssh_certificate_file": &hcldec.AttrSpec{Name: "ssh_certificate_file", Type: cty.String, Required: false},
"ssh_pty": &hcldec.AttrSpec{Name: "ssh_pty", Type: cty.Bool, Required: false},
"ssh_timeout": &hcldec.AttrSpec{Name: "ssh_timeout", Type: cty.String, Required: false},
"ssh_wait_timeout": &hcldec.AttrSpec{Name: "ssh_wait_timeout", Type: cty.String, Required: false},
"ssh_agent_auth": &hcldec.AttrSpec{Name: "ssh_agent_auth", Type: cty.Bool, Required: false},
"ssh_disable_agent_forwarding": &hcldec.AttrSpec{Name: "ssh_disable_agent_forwarding", Type: cty.Bool, Required: false},
"ssh_handshake_attempts": &hcldec.AttrSpec{Name: "ssh_handshake_attempts", Type: cty.Number, Required: false},
"ssh_bastion_host": &hcldec.AttrSpec{Name: "ssh_bastion_host", Type: cty.String, Required: false},
"ssh_bastion_port": &hcldec.AttrSpec{Name: "ssh_bastion_port", Type: cty.Number, Required: false},
"ssh_bastion_agent_auth": &hcldec.AttrSpec{Name: "ssh_bastion_agent_auth", Type: cty.Bool, Required: false},
"ssh_bastion_username": &hcldec.AttrSpec{Name: "ssh_bastion_username", Type: cty.String, Required: false},
"ssh_bastion_password": &hcldec.AttrSpec{Name: "ssh_bastion_password", Type: cty.String, Required: false},
"ssh_bastion_interactive": &hcldec.AttrSpec{Name: "ssh_bastion_interactive", Type: cty.Bool, Required: false},
"ssh_bastion_private_key_file": &hcldec.AttrSpec{Name: "ssh_bastion_private_key_file", Type: cty.String, Required: false},
"ssh_bastion_certificate_file": &hcldec.AttrSpec{Name: "ssh_bastion_certificate_file", Type: cty.String, Required: false},
"ssh_file_transfer_method": &hcldec.AttrSpec{Name: "ssh_file_transfer_method", Type: cty.String, Required: false},
"ssh_proxy_host": &hcldec.AttrSpec{Name: "ssh_proxy_host", Type: cty.String, Required: false},
"ssh_proxy_port": &hcldec.AttrSpec{Name: "ssh_proxy_port", Type: cty.Number, Required: false},
"ssh_proxy_username": &hcldec.AttrSpec{Name: "ssh_proxy_username", Type: cty.String, Required: false},
"ssh_proxy_password": &hcldec.AttrSpec{Name: "ssh_proxy_password", Type: cty.String, Required: false},
"ssh_keep_alive_interval": &hcldec.AttrSpec{Name: "ssh_keep_alive_interval", Type: cty.String, Required: false},
"ssh_read_write_timeout": &hcldec.AttrSpec{Name: "ssh_read_write_timeout", Type: cty.String, Required: false},
"ssh_remote_tunnels": &hcldec.AttrSpec{Name: "ssh_remote_tunnels", Type: cty.List(cty.String), Required: false},
"ssh_local_tunnels": &hcldec.AttrSpec{Name: "ssh_local_tunnels", Type: cty.List(cty.String), Required: false},
"ssh_public_key": &hcldec.AttrSpec{Name: "ssh_public_key", Type: cty.List(cty.Number), Required: false},
"ssh_private_key": &hcldec.AttrSpec{Name: "ssh_private_key", Type: cty.List(cty.Number), Required: false},
"winrm_username": &hcldec.AttrSpec{Name: "winrm_username", Type: cty.String, Required: false},
"winrm_password": &hcldec.AttrSpec{Name: "winrm_password", Type: cty.String, Required: false},
"winrm_host": &hcldec.AttrSpec{Name: "winrm_host", Type: cty.String, Required: false},
"winrm_no_proxy": &hcldec.AttrSpec{Name: "winrm_no_proxy", Type: cty.Bool, Required: false},
"winrm_port": &hcldec.AttrSpec{Name: "winrm_port", Type: cty.Number, Required: false},
"winrm_timeout": &hcldec.AttrSpec{Name: "winrm_timeout", Type: cty.String, Required: false},
"winrm_use_ssl": &hcldec.AttrSpec{Name: "winrm_use_ssl", Type: cty.Bool, Required: false},
"winrm_insecure": &hcldec.AttrSpec{Name: "winrm_insecure", Type: cty.Bool, Required: false},
"winrm_use_ntlm": &hcldec.AttrSpec{Name: "winrm_use_ntlm", Type: cty.Bool, Required: false},
"async_resourcegroup_delete": &hcldec.AttrSpec{Name: "async_resourcegroup_delete", Type: cty.Bool, Required: false},
}
return s
}
// FlatPlanInformation is an auto-generated flat version of PlanInformation.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
type FlatPlanInformation struct {
PlanName *string `mapstructure:"plan_name" cty:"plan_name" hcl:"plan_name"`
PlanProduct *string `mapstructure:"plan_product" cty:"plan_product" hcl:"plan_product"`
PlanPublisher *string `mapstructure:"plan_publisher" cty:"plan_publisher" hcl:"plan_publisher"`
PlanPromotionCode *string `mapstructure:"plan_promotion_code" cty:"plan_promotion_code" hcl:"plan_promotion_code"`
}
// FlatMapstructure returns a new FlatPlanInformation.
// FlatPlanInformation is an auto-generated flat version of PlanInformation.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
func (*PlanInformation) FlatMapstructure() interface{ HCL2Spec() map[string]hcldec.Spec } {
return new(FlatPlanInformation)
}
// HCL2Spec returns the hcl spec of a PlanInformation.
// This spec is used by HCL to read the fields of PlanInformation.
// The decoded values from this spec will then be applied to a FlatPlanInformation.
func (*FlatPlanInformation) HCL2Spec() map[string]hcldec.Spec {
s := map[string]hcldec.Spec{
"plan_name": &hcldec.AttrSpec{Name: "plan_name", Type: cty.String, Required: false},
"plan_product": &hcldec.AttrSpec{Name: "plan_product", Type: cty.String, Required: false},
"plan_publisher": &hcldec.AttrSpec{Name: "plan_publisher", Type: cty.String, Required: false},
"plan_promotion_code": &hcldec.AttrSpec{Name: "plan_promotion_code", Type: cty.String, Required: false},
}
return s
}
// FlatSharedImageGallery is an auto-generated flat version of SharedImageGallery.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
type FlatSharedImageGallery struct {
Subscription *string `mapstructure:"subscription" cty:"subscription" hcl:"subscription"`
ResourceGroup *string `mapstructure:"resource_group" cty:"resource_group" hcl:"resource_group"`
GalleryName *string `mapstructure:"gallery_name" cty:"gallery_name" hcl:"gallery_name"`
ImageName *string `mapstructure:"image_name" cty:"image_name" hcl:"image_name"`
ImageVersion *string `mapstructure:"image_version" required:"false" cty:"image_version" hcl:"image_version"`
}
// FlatMapstructure returns a new FlatSharedImageGallery.
// FlatSharedImageGallery is an auto-generated flat version of SharedImageGallery.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
func (*SharedImageGallery) FlatMapstructure() interface{ HCL2Spec() map[string]hcldec.Spec } {
return new(FlatSharedImageGallery)
}
// HCL2Spec returns the hcl spec of a SharedImageGallery.
// This spec is used by HCL to read the fields of SharedImageGallery.
// The decoded values from this spec will then be applied to a FlatSharedImageGallery.
func (*FlatSharedImageGallery) HCL2Spec() map[string]hcldec.Spec {
s := map[string]hcldec.Spec{
"subscription": &hcldec.AttrSpec{Name: "subscription", Type: cty.String, Required: false},
"resource_group": &hcldec.AttrSpec{Name: "resource_group", Type: cty.String, Required: false},
"gallery_name": &hcldec.AttrSpec{Name: "gallery_name", Type: cty.String, Required: false},
"image_name": &hcldec.AttrSpec{Name: "image_name", Type: cty.String, Required: false},
"image_version": &hcldec.AttrSpec{Name: "image_version", Type: cty.String, Required: false},
}
return s
}
// FlatSharedImageGalleryDestination is an auto-generated flat version of SharedImageGalleryDestination.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
type FlatSharedImageGalleryDestination struct {
SigDestinationSubscription *string `mapstructure:"subscription" cty:"subscription" hcl:"subscription"`
SigDestinationResourceGroup *string `mapstructure:"resource_group" cty:"resource_group" hcl:"resource_group"`
SigDestinationGalleryName *string `mapstructure:"gallery_name" cty:"gallery_name" hcl:"gallery_name"`
SigDestinationImageName *string `mapstructure:"image_name" cty:"image_name" hcl:"image_name"`
SigDestinationImageVersion *string `mapstructure:"image_version" cty:"image_version" hcl:"image_version"`
SigDestinationReplicationRegions []string `mapstructure:"replication_regions" cty:"replication_regions" hcl:"replication_regions"`
}
// FlatMapstructure returns a new FlatSharedImageGalleryDestination.
// FlatSharedImageGalleryDestination is an auto-generated flat version of SharedImageGalleryDestination.
// Where the contents of a field with a `mapstructure:,squash` tag are bubbled up.
func (*SharedImageGalleryDestination) FlatMapstructure() interface{ HCL2Spec() map[string]hcldec.Spec } {
return new(FlatSharedImageGalleryDestination)
}
// HCL2Spec returns the hcl spec of a SharedImageGalleryDestination.
// This spec is used by HCL to read the fields of SharedImageGalleryDestination.
// The decoded values from this spec will then be applied to a FlatSharedImageGalleryDestination.
func (*FlatSharedImageGalleryDestination) HCL2Spec() map[string]hcldec.Spec {
s := map[string]hcldec.Spec{
"subscription": &hcldec.AttrSpec{Name: "subscription", Type: cty.String, Required: false},
"resource_group": &hcldec.AttrSpec{Name: "resource_group", Type: cty.String, Required: false},
"gallery_name": &hcldec.AttrSpec{Name: "gallery_name", Type: cty.String, Required: false},
"image_name": &hcldec.AttrSpec{Name: "image_name", Type: cty.String, Required: false},
"image_version": &hcldec.AttrSpec{Name: "image_version", Type: cty.String, Required: false},
"replication_regions": &hcldec.AttrSpec{Name: "replication_regions", Type: cty.List(cty.String), Required: false},
}
return s
}
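For reference, a generated spec like the one above is just a map of hcldec specs. The sketch below is an editorial illustration, not part of the removed plugin code: it shows how such a map could be fed to the hashicorp/hcl v2 `hcldec` package to decode a couple of the attributes declared above. The attribute values and the file name `example.pkr.hcl` are made up for the example, and a tiny stand-in spec is used in place of the full `FlatConfig.HCL2Spec()` map.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Hypothetical HCL body using two of the attributes declared in the spec above.
	src := []byte("vm_size  = \"Standard_DS2_v2\"\nlocation = \"West US 2\"\n")

	file, diags := hclparse.NewParser().ParseHCL(src, "example.pkr.hcl")
	if diags.HasErrors() {
		log.Fatal(diags)
	}

	// Small stand-in for the generated FlatConfig.HCL2Spec(); the real map is shown above.
	spec := hcldec.ObjectSpec{
		"vm_size":  &hcldec.AttrSpec{Name: "vm_size", Type: cty.String, Required: false},
		"location": &hcldec.AttrSpec{Name: "location", Type: cty.String, Required: false},
	}

	val, diags := hcldec.Decode(file.Body, spec, nil)
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	fmt.Println(val.GetAttr("vm_size").AsString()) // Standard_DS2_v2
}
```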

File diff suppressed because it is too large


@@ -1,71 +0,0 @@
package arm
import (
"bytes"
"io/ioutil"
"log"
"net/http"
"io"
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/hashicorp/packer/builder/azure/common/logutil"
)
func chop(data []byte, maxlen int64) string {
s := string(data)
if int64(len(s)) > maxlen {
s = s[:maxlen] + "..."
}
return s
}
func handleBody(body io.ReadCloser, maxlen int64) (io.ReadCloser, string) {
if body == nil {
return nil, ""
}
defer body.Close()
b, err := ioutil.ReadAll(body)
if err != nil {
return nil, ""
}
return ioutil.NopCloser(bytes.NewReader(b)), chop(b, maxlen)
}
func withInspection(maxlen int64) autorest.PrepareDecorator {
return func(p autorest.Preparer) autorest.Preparer {
return autorest.PreparerFunc(func(r *http.Request) (*http.Request, error) {
body, bodyString := handleBody(r.Body, maxlen)
r.Body = body
log.Print("Azure request", logutil.Fields{
"method": r.Method,
"request": r.URL.String(),
"body": bodyString,
})
return p.Prepare(r)
})
}
}
func byInspecting(maxlen int64) autorest.RespondDecorator {
return func(r autorest.Responder) autorest.Responder {
return autorest.ResponderFunc(func(resp *http.Response) error {
body, bodyString := handleBody(resp.Body, maxlen)
resp.Body = body
log.Print("Azure response", logutil.Fields{
"status": resp.Status,
"method": resp.Request.Method,
"request": resp.Request.URL.String(),
"x-ms-request-id": azure.ExtractRequestID(resp),
"body": bodyString,
})
return r.Respond(resp)
})
}
}
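These two decorators were attached to the Azure SDK clients as request and response inspectors. A minimal sketch of that wiring, assuming an `autorest.Client` and a caller-chosen `maxlen`, might look like this; the helper name `configureInspection` is invented for the example.

```go
package arm

import "github.com/Azure/go-autorest/autorest"

// configureInspection is a hypothetical helper: it hooks the decorators above
// into an autorest client so every outgoing request and incoming response is
// logged, with bodies truncated to maxlen bytes.
func configureInspection(client *autorest.Client, maxlen int64) {
	client.RequestInspector = withInspection(maxlen)
	client.ResponseInspector = byInspecting(maxlen)
}
```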


@@ -1,59 +0,0 @@
package arm
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"encoding/base64"
"encoding/pem"
"fmt"
"time"
"golang.org/x/crypto/ssh"
)
const (
KeySize = 2048
)
type OpenSshKeyPair struct {
privateKey *rsa.PrivateKey
publicKey ssh.PublicKey
}
func NewOpenSshKeyPair() (*OpenSshKeyPair, error) {
return NewOpenSshKeyPairWithSize(KeySize)
}
func NewOpenSshKeyPairWithSize(keySize int) (*OpenSshKeyPair, error) {
privateKey, err := rsa.GenerateKey(rand.Reader, keySize)
if err != nil {
return nil, err
}
publicKey, err := ssh.NewPublicKey(&privateKey.PublicKey)
if err != nil {
return nil, err
}
return &OpenSshKeyPair{
privateKey: privateKey,
publicKey: publicKey,
}, nil
}
func (s *OpenSshKeyPair) AuthorizedKey() string {
return fmt.Sprintf("%s %s packer Azure Deployment%s",
s.publicKey.Type(),
base64.StdEncoding.EncodeToString(s.publicKey.Marshal()),
time.Now().Format(time.RFC3339))
}
func (s *OpenSshKeyPair) PrivateKey() []byte {
privateKey := pem.EncodeToMemory(&pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: x509.MarshalPKCS1PrivateKey(s.privateKey),
})
return privateKey
}
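A quick, hedged illustration of how this key pair type would typically be consumed (the function name is invented): generate a pair, publish the authorized-key line to the VM, and keep the PEM-encoded private key for the SSH communicator.

```go
package arm

import "fmt"

// exampleKeyPairUsage is a hypothetical caller of the helpers above.
func exampleKeyPairUsage() ([]byte, error) {
	kp, err := NewOpenSshKeyPair() // 2048-bit RSA by default
	if err != nil {
		return nil, err
	}
	fmt.Println(kp.AuthorizedKey()) // line destined for the VM's authorized_keys
	return kp.PrivateKey(), nil     // PEM bytes for the SSH communicator
}
```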


@@ -1,37 +0,0 @@
package arm
import (
"testing"
"golang.org/x/crypto/ssh"
)
func TestFart(t *testing.T) {
}
func TestAuthorizedKeyShouldParse(t *testing.T) {
testSubject, err := NewOpenSshKeyPairWithSize(512)
if err != nil {
t.Fatalf("Failed to create a new OpenSSH key pair, err=%s.", err)
}
authorizedKey := testSubject.AuthorizedKey()
_, _, _, _, err = ssh.ParseAuthorizedKey([]byte(authorizedKey))
if err != nil {
t.Fatalf("Failed to parse the authorized key, err=%s", err)
}
}
func TestPrivateKeyShouldParse(t *testing.T) {
testSubject, err := NewOpenSshKeyPairWithSize(512)
if err != nil {
t.Fatalf("Failed to create a new OpenSSH key pair, err=%s.", err)
}
_, err = ssh.ParsePrivateKey([]byte(testSubject.PrivateKey()))
if err != nil {
t.Fatalf("Failed to parse the private key, err=%s\n", err)
}
}


@@ -1,141 +0,0 @@
package arm
// Code to resolve resources that are required by the API. These resources
// can most likely be resolved without asking the user, thereby reducing the
// amount of configuration they need to provide.
//
// Resource resolver differs from config retriever because resource resolver
// requires a client to communicate with the Azure API. A config retriever is
// used to determine values without use of a client.
import (
"context"
"fmt"
"strings"
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)
type resourceResolver struct {
client *AzureClient
findVirtualNetworkResourceGroup func(*AzureClient, string) (string, error)
findVirtualNetworkSubnet func(*AzureClient, string, string) (string, error)
}
func newResourceResolver(client *AzureClient) *resourceResolver {
return &resourceResolver{
client: client,
findVirtualNetworkResourceGroup: findVirtualNetworkResourceGroup,
findVirtualNetworkSubnet: findVirtualNetworkSubnet,
}
}
func (s *resourceResolver) Resolve(c *Config) error {
if s.shouldResolveResourceGroup(c) {
resourceGroupName, err := s.findVirtualNetworkResourceGroup(s.client, c.VirtualNetworkName)
if err != nil {
return err
}
subnetName, err := s.findVirtualNetworkSubnet(s.client, resourceGroupName, c.VirtualNetworkName)
if err != nil {
return err
}
c.VirtualNetworkResourceGroupName = resourceGroupName
c.VirtualNetworkSubnetName = subnetName
}
if s.shouldResolveManagedImageName(c) {
image, err := findManagedImageByName(s.client, c.CustomManagedImageName, c.CustomManagedImageResourceGroupName)
if err != nil {
return err
}
c.customManagedImageID = *image.ID
}
return nil
}
func (s *resourceResolver) shouldResolveResourceGroup(c *Config) bool {
return c.VirtualNetworkName != "" && c.VirtualNetworkResourceGroupName == ""
}
func (s *resourceResolver) shouldResolveManagedImageName(c *Config) bool {
return c.CustomManagedImageName != ""
}
func getResourceGroupNameFromId(id string) string {
// "/subscriptions/3f499422-dd76-4114-8859-86d526c9deb6/resourceGroups/packer-Resource-Group-yylnwsl30j/providers/...
xs := strings.Split(id, "/")
return xs[4]
}
func findManagedImageByName(client *AzureClient, name, resourceGroupName string) (*compute.Image, error) {
images, err := client.ImagesClient.ListByResourceGroupComplete(context.TODO(), resourceGroupName)
if err != nil {
return nil, err
}
for images.NotDone() {
image := images.Value()
if strings.EqualFold(name, *image.Name) {
return &image, nil
}
if err = images.Next(); err != nil {
return nil, err
}
}
return nil, fmt.Errorf("Cannot find an image named '%s' in the resource group '%s'", name, resourceGroupName)
}
func findVirtualNetworkResourceGroup(client *AzureClient, name string) (string, error) {
virtualNetworks, err := client.VirtualNetworksClient.ListAllComplete(context.TODO())
if err != nil {
return "", err
}
resourceGroupNames := make([]string, 0)
for virtualNetworks.NotDone() {
virtualNetwork := virtualNetworks.Value()
if strings.EqualFold(name, *virtualNetwork.Name) {
rgn := getResourceGroupNameFromId(*virtualNetwork.ID)
resourceGroupNames = append(resourceGroupNames, rgn)
}
if err = virtualNetworks.Next(); err != nil {
return "", err
}
}
if len(resourceGroupNames) == 0 {
return "", fmt.Errorf("Cannot find a resource group with a virtual network called %q", name)
}
if len(resourceGroupNames) > 1 {
return "", fmt.Errorf("Found multiple resource groups with a virtual network called %q, please use virtual_network_subnet_name and virtual_network_resource_group_name to disambiguate", name)
}
return resourceGroupNames[0], nil
}
func findVirtualNetworkSubnet(client *AzureClient, resourceGroupName string, name string) (string, error) {
subnets, err := client.SubnetsClient.List(context.TODO(), resourceGroupName, name)
if err != nil {
return "", err
}
subnetList := subnets.Values() // only first page of subnets, but only interested in ==0 or >1
if len(subnetList) == 0 {
return "", fmt.Errorf("Cannot find a subnet in the resource group %q associated with the virtual network called %q", resourceGroupName, name)
}
if len(subnetList) > 1 {
return "", fmt.Errorf("Found multiple subnets in the resource group %q associated with the virtual network called %q, please use virtual_network_subnet_name and virtual_network_resource_group_name to disambiguate", resourceGroupName, name)
}
subnet := subnetList[0]
return *subnet.Name, nil
}
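As a rough usage sketch (not part of the removed file), the resolver is constructed from an authenticated client and run once against the builder configuration before any resources are created; it only fills in values the user left blank.

```go
package arm

// resolveExample is a hypothetical call site for the resolver above. It fills
// in the virtual network resource group/subnet and the managed image ID when
// the user did not specify them explicitly.
func resolveExample(client *AzureClient, c *Config) error {
	return newResourceResolver(client).Resolve(c)
}
```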


@@ -1,79 +0,0 @@
package arm
import (
"testing"
)
func TestResourceResolverIgnoresEmptyVirtualNetworkName(t *testing.T) {
var c Config
c.Prepare(getArmBuilderConfiguration(), getPackerConfiguration())
if c.VirtualNetworkName != "" {
t.Fatalf("Expected VirtualNetworkName to be empty by default")
}
sut := newTestResourceResolver()
sut.findVirtualNetworkResourceGroup = nil // assert that this is not even called
sut.Resolve(&c)
if c.VirtualNetworkName != "" {
t.Fatalf("Expected VirtualNetworkName to be empty")
}
if c.VirtualNetworkResourceGroupName != "" {
t.Fatalf("Expected VirtualNetworkResourceGroupName to be empty")
}
}
// If the user fully specified the virtual network name and resource group then
// there is no need to do a lookup.
func TestResourceResolverIgnoresSetVirtualNetwork(t *testing.T) {
var c Config
c.Prepare(getArmBuilderConfiguration(), getPackerConfiguration())
c.VirtualNetworkName = "--virtual-network-name--"
c.VirtualNetworkResourceGroupName = "--virtual-network-resource-group-name--"
c.VirtualNetworkSubnetName = "--virtual-network-subnet-name--"
sut := newTestResourceResolver()
sut.findVirtualNetworkResourceGroup = nil // assert that this is not even called
sut.findVirtualNetworkSubnet = nil // assert that this is not even called
sut.Resolve(&c)
if c.VirtualNetworkName != "--virtual-network-name--" {
t.Fatalf("Expected VirtualNetworkName to be --virtual-network-name--")
}
if c.VirtualNetworkResourceGroupName != "--virtual-network-resource-group-name--" {
t.Fatalf("Expected VirtualNetworkResourceGroupName to be --virtual-network-resource-group-name--")
}
if c.VirtualNetworkSubnetName != "--virtual-network-subnet-name--" {
t.Fatalf("Expected VirtualNetworkSubnetName to be --virtual-network-subnet-name--")
}
}
// If the user set virtual network name then the code should resolve virtual network
// resource group name.
func TestResourceResolverSetVirtualNetworkResourceGroupName(t *testing.T) {
var c Config
c.Prepare(getArmBuilderConfiguration(), getPackerConfiguration())
c.VirtualNetworkName = "--virtual-network-name--"
sut := newTestResourceResolver()
sut.Resolve(&c)
if c.VirtualNetworkResourceGroupName != "findVirtualNetworkResourceGroup is mocked" {
t.Fatalf("Expected VirtualNetworkResourceGroupName to be 'findVirtualNetworkResourceGroup is mocked'")
}
if c.VirtualNetworkSubnetName != "findVirtualNetworkSubnet is mocked" {
t.Fatalf("Expected findVirtualNetworkSubnet to be 'findVirtualNetworkSubnet is mocked'")
}
}
func newTestResourceResolver() resourceResolver {
return resourceResolver{
client: nil,
findVirtualNetworkResourceGroup: func(*AzureClient, string) (string, error) {
return "findVirtualNetworkResourceGroup is mocked", nil
},
findVirtualNetworkSubnet: func(*AzureClient, string, string) (string, error) {
return "findVirtualNetworkSubnet is mocked", nil
},
}
}


@@ -1,20 +0,0 @@
package arm
import (
"github.com/hashicorp/packer-plugin-sdk/multistep"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func processStepResult(
err error, sayError func(error), state multistep.StateBag) multistep.StepAction {
if err != nil {
state.Put(constants.Error, err)
sayError(err)
return multistep.ActionHalt
}
return multistep.ActionContinue
}
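For context, the steps in this package funnel their error handling through this helper. A hedged sketch of a step using it follows; the step type and its error message are invented for illustration.

```go
package arm

import (
	"context"
	"errors"

	"github.com/hashicorp/packer-plugin-sdk/multistep"
)

// exampleStep is a hypothetical step showing the intended pattern: on failure,
// processStepResult stores the error in the state bag, reports it through the
// step's error callback, and halts the build.
type exampleStep struct {
	error func(e error)
}

func (s *exampleStep) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
	err := errors.New("example failure")
	return processStepResult(err, s.error, state)
}

func (*exampleStep) Cleanup(multistep.StateBag) {}
```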


@@ -1,120 +0,0 @@
package arm
import (
"context"
"fmt"
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
type StepCaptureImage struct {
client *AzureClient
generalizeVM func(resourceGroupName, computeName string) error
captureVhd func(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters) error
captureManagedImage func(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.Image) error
get func(client *AzureClient) *CaptureTemplate
say func(message string)
error func(e error)
}
func NewStepCaptureImage(client *AzureClient, ui packersdk.Ui) *StepCaptureImage {
var step = &StepCaptureImage{
client: client,
get: func(client *AzureClient) *CaptureTemplate {
return client.Template
},
say: func(message string) {
ui.Say(message)
},
error: func(e error) {
ui.Error(e.Error())
},
}
step.generalizeVM = step.generalize
step.captureVhd = step.captureImage
step.captureManagedImage = step.captureImageFromVM
return step
}
func (s *StepCaptureImage) generalize(resourceGroupName string, computeName string) error {
_, err := s.client.Generalize(context.TODO(), resourceGroupName, computeName)
if err != nil {
s.say(s.client.LastError.Error())
}
return err
}
func (s *StepCaptureImage) captureImageFromVM(ctx context.Context, resourceGroupName string, imageName string, image *compute.Image) error {
f, err := s.client.ImagesClient.CreateOrUpdate(ctx, resourceGroupName, imageName, *image)
if err != nil {
s.say(s.client.LastError.Error())
}
return f.WaitForCompletionRef(ctx, s.client.ImagesClient.Client)
}
func (s *StepCaptureImage) captureImage(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters) error {
f, err := s.client.VirtualMachinesClient.Capture(ctx, resourceGroupName, computeName, *parameters)
if err != nil {
s.say(s.client.LastError.Error())
}
return f.WaitForCompletionRef(ctx, s.client.VirtualMachinesClient.Client)
}
func (s *StepCaptureImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
s.say("Capturing image ...")
var computeName = state.Get(constants.ArmComputeName).(string)
var location = state.Get(constants.ArmLocation).(string)
var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string)
var vmCaptureParameters = state.Get(constants.ArmVirtualMachineCaptureParameters).(*compute.VirtualMachineCaptureParameters)
var imageParameters = state.Get(constants.ArmImageParameters).(*compute.Image)
var isManagedImage = state.Get(constants.ArmIsManagedImage).(bool)
var targetManagedImageResourceGroupName = state.Get(constants.ArmManagedImageResourceGroupName).(string)
var targetManagedImageName = state.Get(constants.ArmManagedImageName).(string)
var targetManagedImageLocation = state.Get(constants.ArmLocation).(string)
s.say(fmt.Sprintf(" -> Compute ResourceGroupName : '%s'", resourceGroupName))
s.say(fmt.Sprintf(" -> Compute Name : '%s'", computeName))
s.say(fmt.Sprintf(" -> Compute Location : '%s'", location))
err := s.generalizeVM(resourceGroupName, computeName)
if err == nil {
if isManagedImage {
s.say(fmt.Sprintf(" -> Image ResourceGroupName : '%s'", targetManagedImageResourceGroupName))
s.say(fmt.Sprintf(" -> Image Name : '%s'", targetManagedImageName))
s.say(fmt.Sprintf(" -> Image Location : '%s'", targetManagedImageLocation))
err = s.captureManagedImage(ctx, targetManagedImageResourceGroupName, targetManagedImageName, imageParameters)
} else {
err = s.captureVhd(ctx, resourceGroupName, computeName, vmCaptureParameters)
}
}
if err != nil {
state.Put(constants.Error, err)
s.error(err)
return multistep.ActionHalt
}
// HACK(chrboum): I do not like this. The capture method should be returning this value
// instead having to pass in another lambda.
//
// Having to resort to capturing the template via an inspector is hack, and once I can
// resolve that I can cleanup this code too. See the comments in azure_client.go for more
// details.
// [paulmey]: autorest.Future now has access to the last http.Response, but I'm not sure if
// the body is still accessible.
template := s.get(s.client)
state.Put(constants.ArmCaptureTemplate, template)
return multistep.ActionContinue
}
func (*StepCaptureImage) Cleanup(multistep.StateBag) {
}
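To show where this step sits in a build, here is a hedged sketch of wiring it into a multistep runner. The surrounding builder normally assembles a much longer step list; `runCaptureOnly` and its arguments are invented for illustration.

```go
package arm

import (
	"context"

	"github.com/hashicorp/packer-plugin-sdk/multistep"
	packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)

// runCaptureOnly is a hypothetical driver that runs just the capture step.
// The state bag must already contain the constants.Arm* keys read by Run.
func runCaptureOnly(ctx context.Context, client *AzureClient, ui packersdk.Ui, state multistep.StateBag) {
	runner := &multistep.BasicRunner{
		Steps: []multistep.Step{
			NewStepCaptureImage(client, ui),
		},
	}
	runner.Run(ctx, state)
}
```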


@@ -1,137 +0,0 @@
package arm
import (
"context"
"fmt"
"testing"
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
"github.com/hashicorp/packer-plugin-sdk/multistep"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func TestStepCaptureImageShouldFailIfCaptureFails(t *testing.T) {
var testSubject = &StepCaptureImage{
captureVhd: func(context.Context, string, string, *compute.VirtualMachineCaptureParameters) error {
return fmt.Errorf("!! Unit Test FAIL !!")
},
generalizeVM: func(string, string) error {
return nil
},
get: func(client *AzureClient) *CaptureTemplate {
return nil
},
say: func(message string) {},
error: func(e error) {},
}
stateBag := createTestStateBagStepCaptureImage()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepCaptureImageShouldPassIfCapturePasses(t *testing.T) {
var testSubject = &StepCaptureImage{
captureVhd: func(context.Context, string, string, *compute.VirtualMachineCaptureParameters) error { return nil },
generalizeVM: func(string, string) error {
return nil
},
get: func(client *AzureClient) *CaptureTemplate {
return nil
},
say: func(message string) {},
error: func(e error) {},
}
stateBag := createTestStateBagStepCaptureImage()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == true {
t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepCaptureImageShouldTakeStepArgumentsFromStateBag(t *testing.T) {
cancelCh := make(chan<- struct{})
defer close(cancelCh)
var actualResourceGroupName string
var actualComputeName string
var actualVirtualMachineCaptureParameters *compute.VirtualMachineCaptureParameters
actualCaptureTemplate := &CaptureTemplate{
Schema: "!! Unit Test !!",
}
var testSubject = &StepCaptureImage{
captureVhd: func(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters) error {
actualResourceGroupName = resourceGroupName
actualComputeName = computeName
actualVirtualMachineCaptureParameters = parameters
return nil
},
generalizeVM: func(string, string) error {
return nil
},
get: func(client *AzureClient) *CaptureTemplate {
return actualCaptureTemplate
},
say: func(message string) {},
error: func(e error) {},
}
stateBag := createTestStateBagStepCaptureImage()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
var expectedComputeName = stateBag.Get(constants.ArmComputeName).(string)
var expectedResourceGroupName = stateBag.Get(constants.ArmResourceGroupName).(string)
var expectedVirtualMachineCaptureParameters = stateBag.Get(constants.ArmVirtualMachineCaptureParameters).(*compute.VirtualMachineCaptureParameters)
var expectedCaptureTemplate = stateBag.Get(constants.ArmCaptureTemplate).(*CaptureTemplate)
if actualComputeName != expectedComputeName {
t.Fatal("Expected StepCaptureImage to source 'constants.ArmComputeName' from the state bag, but it did not.")
}
if actualResourceGroupName != expectedResourceGroupName {
t.Fatal("Expected StepCaptureImage to source 'constants.ArmResourceGroupName' from the state bag, but it did not.")
}
if actualVirtualMachineCaptureParameters != expectedVirtualMachineCaptureParameters {
t.Fatal("Expected StepCaptureImage to source 'constants.ArmVirtualMachineCaptureParameters' from the state bag, but it did not.")
}
if actualCaptureTemplate != expectedCaptureTemplate {
t.Fatal("Expected StepCaptureImage to source 'constants.ArmCaptureTemplate' from the state bag, but it did not.")
}
}
func createTestStateBagStepCaptureImage() multistep.StateBag {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmLocation, "localhost")
stateBag.Put(constants.ArmComputeName, "Unit Test: ComputeName")
stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName")
stateBag.Put(constants.ArmVirtualMachineCaptureParameters, &compute.VirtualMachineCaptureParameters{})
stateBag.Put(constants.ArmIsManagedImage, false)
stateBag.Put(constants.ArmManagedImageResourceGroupName, "")
stateBag.Put(constants.ArmManagedImageName, "")
stateBag.Put(constants.ArmImageParameters, &compute.Image{})
return stateBag
}


@@ -1,45 +0,0 @@
package arm
import (
"context"
"fmt"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer/builder/azure/common"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
type StepCertificateInKeyVault struct {
config *Config
client common.AZVaultClientIface
say func(message string)
error func(e error)
}
func NewStepCertificateInKeyVault(cli common.AZVaultClientIface, ui packersdk.Ui, config *Config) *StepCertificateInKeyVault {
var step = &StepCertificateInKeyVault{
client: cli,
config: config,
say: func(message string) { ui.Say(message) },
error: func(e error) { ui.Error(e.Error()) },
}
return step
}
func (s *StepCertificateInKeyVault) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
s.say("Setting the certificate in the KeyVault...")
var keyVaultName = state.Get(constants.ArmKeyVaultName).(string)
err := s.client.SetSecret(keyVaultName, DefaultSecretName, s.config.winrmCertificate)
if err != nil {
s.error(fmt.Errorf("Error setting winrm cert in custom keyvault: %s", err))
return multistep.ActionHalt
}
return multistep.ActionContinue
}
func (*StepCertificateInKeyVault) Cleanup(multistep.StateBag) {
}


@@ -1,66 +0,0 @@
package arm
import (
"bytes"
"context"
"testing"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
azcommon "github.com/hashicorp/packer/builder/azure/common"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func TestNewStepCertificateInKeyVault(t *testing.T) {
cli := azcommon.MockAZVaultClient{}
ui := &packersdk.BasicUi{
Reader: new(bytes.Buffer),
Writer: new(bytes.Buffer),
}
state := new(multistep.BasicStateBag)
state.Put(constants.ArmKeyVaultName, "testKeyVaultName")
config := &Config{
winrmCertificate: "testCertificateString",
}
certKVStep := NewStepCertificateInKeyVault(&cli, ui, config)
stepAction := certKVStep.Run(context.TODO(), state)
if stepAction == multistep.ActionHalt {
t.Fatalf("step should have succeeded.")
}
if !cli.SetSecretCalled {
t.Fatalf("Step should have called SetSecret on Azure client.")
}
if cli.SetSecretCert != "testCertificateString" {
t.Fatalf("Step should have read cert from winRMCertificate field on config.")
}
if cli.SetSecretVaultName != "testKeyVaultName" {
t.Fatalf("step should have read keyvault name from state.")
}
}
func TestNewStepCertificateInKeyVault_error(t *testing.T) {
// Tell mock to return an error
cli := azcommon.MockAZVaultClient{}
cli.IsError = true
ui := &packersdk.BasicUi{
Reader: new(bytes.Buffer),
Writer: new(bytes.Buffer),
}
state := new(multistep.BasicStateBag)
state.Put(constants.ArmKeyVaultName, "testKeyVaultName")
config := &Config{
winrmCertificate: "testCertificateString",
}
certKVStep := NewStepCertificateInKeyVault(&cli, ui, config)
stepAction := certKVStep.Run(context.TODO(), state)
if stepAction != multistep.ActionHalt {
t.Fatalf("step should have failed.")
}
}


@@ -1,146 +0,0 @@
package arm
import (
"context"
"errors"
"fmt"
"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
type StepCreateResourceGroup struct {
client *AzureClient
create func(ctx context.Context, resourceGroupName string, location string, tags map[string]*string) error
say func(message string)
error func(e error)
exists func(ctx context.Context, resourceGroupName string) (bool, error)
}
func NewStepCreateResourceGroup(client *AzureClient, ui packersdk.Ui) *StepCreateResourceGroup {
var step = &StepCreateResourceGroup{
client: client,
say: func(message string) { ui.Say(message) },
error: func(e error) { ui.Error(e.Error()) },
}
step.create = step.createResourceGroup
step.exists = step.doesResourceGroupExist
return step
}
func (s *StepCreateResourceGroup) createResourceGroup(ctx context.Context, resourceGroupName string, location string, tags map[string]*string) error {
_, err := s.client.GroupsClient.CreateOrUpdate(ctx, resourceGroupName, resources.Group{
Location: &location,
Tags: tags,
})
if err != nil {
s.say(s.client.LastError.Error())
}
return err
}
func (s *StepCreateResourceGroup) doesResourceGroupExist(ctx context.Context, resourceGroupName string) (bool, error) {
exists, err := s.client.GroupsClient.CheckExistence(ctx, resourceGroupName)
if err != nil {
s.say(s.client.LastError.Error())
}
return exists.Response.StatusCode != 404, err
}
func (s *StepCreateResourceGroup) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
var doubleResource, ok = state.GetOk(constants.ArmDoubleResourceGroupNameSet)
if ok && doubleResource.(bool) {
err := errors.New("You have filled in both temp_resource_group_name and build_resource_group_name. Please choose one.")
return processStepResult(err, s.error, state)
}
var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string)
var location = state.Get(constants.ArmLocation).(string)
tags, ok := state.Get(constants.ArmTags).(map[string]*string)
if !ok {
err := fmt.Errorf("failed to extract tags from state bag")
state.Put(constants.Error, err)
s.error(err)
return multistep.ActionHalt
}
exists, err := s.exists(ctx, resourceGroupName)
if err != nil {
return processStepResult(err, s.error, state)
}
configThinksExists := state.Get(constants.ArmIsExistingResourceGroup).(bool)
if exists != configThinksExists {
if configThinksExists {
err = errors.New("The resource group you want to use does not exist yet. Please use temp_resource_group_name to create a temporary resource group.")
} else {
err = errors.New("A resource group with that name already exists. Please use build_resource_group_name to use an existing resource group.")
}
return processStepResult(err, s.error, state)
}
// If the resource group exists, we may not have permissions to update it so we don't.
if !exists {
s.say("Creating resource group ...")
s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName))
s.say(fmt.Sprintf(" -> Location : '%s'", location))
s.say(fmt.Sprintf(" -> Tags :"))
for k, v := range tags {
s.say(fmt.Sprintf(" ->> %s : %s", k, *v))
}
err = s.create(ctx, resourceGroupName, location, tags)
if err == nil {
state.Put(constants.ArmIsResourceGroupCreated, true)
}
} else {
s.say("Using existing resource group ...")
s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName))
s.say(fmt.Sprintf(" -> Location : '%s'", location))
state.Put(constants.ArmIsResourceGroupCreated, true)
}
return processStepResult(err, s.error, state)
}
func (s *StepCreateResourceGroup) Cleanup(state multistep.StateBag) {
isCreated, ok := state.GetOk(constants.ArmIsResourceGroupCreated)
if !ok || !isCreated.(bool) {
return
}
ui := state.Get("ui").(packersdk.Ui)
if state.Get(constants.ArmIsExistingResourceGroup).(bool) {
ui.Say("\nThe resource group was not created by Packer, not deleting ...")
return
}
ctx := context.TODO()
resourceGroupName := state.Get(constants.ArmResourceGroupName).(string)
if exists, err := s.exists(ctx, resourceGroupName); !exists || err != nil {
return
}
ui.Say("\nCleanup requested, deleting resource group ...")
f, err := s.client.GroupsClient.Delete(ctx, resourceGroupName)
if err == nil {
if state.Get(constants.ArmAsyncResourceGroupDelete).(bool) {
s.say(fmt.Sprintf("\n Not waiting for Resource Group delete as requested by user. Resource Group Name is %s", resourceGroupName))
} else {
err = f.WaitForCompletionRef(ctx, s.client.GroupsClient.Client)
}
}
if err != nil {
ui.Error(fmt.Sprintf("Error deleting resource group. Please delete it manually.\n\n"+
"Name: %s\n"+
"Error: %s", resourceGroupName, err))
return
}
if !state.Get(constants.ArmAsyncResourceGroupDelete).(bool) {
ui.Say("Resource group has been deleted.")
}
}


@@ -1,251 +0,0 @@
package arm
import (
"context"
"errors"
"fmt"
"testing"
"github.com/hashicorp/packer-plugin-sdk/multistep"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func TestStepCreateResourceGroupShouldFailIfBothGroupNames(t *testing.T) {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmDoubleResourceGroupNameSet, true)
value := "Unit Test: Tags"
tags := map[string]*string{
"tag01": &value,
}
stateBag.Put(constants.ArmTags, tags)
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error { return nil },
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, nil },
}
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepCreateResourceGroupShouldFailIfCreateFails(t *testing.T) {
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error {
return fmt.Errorf("!! Unit Test FAIL !!")
},
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, nil },
}
stateBag := createTestStateBagStepCreateResourceGroup()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepCreateResourceGroupShouldFailIfExistsFails(t *testing.T) {
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error { return nil },
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, errors.New("FAIL") },
}
stateBag := createTestStateBagStepCreateResourceGroup()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepCreateResourceGroupShouldPassIfCreatePasses(t *testing.T) {
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error { return nil },
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, nil },
}
stateBag := createTestStateBagStepCreateResourceGroup()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == true {
t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T) {
var actualResourceGroupName string
var actualLocation string
var actualTags map[string]*string
var testSubject = &StepCreateResourceGroup{
create: func(ctx context.Context, resourceGroupName string, location string, tags map[string]*string) error {
actualResourceGroupName = resourceGroupName
actualLocation = location
actualTags = tags
return nil
},
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, nil },
}
stateBag := createTestStateBagStepCreateResourceGroup()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
var expectedResourceGroupName = stateBag.Get(constants.ArmResourceGroupName).(string)
var expectedLocation = stateBag.Get(constants.ArmLocation).(string)
var expectedTags = stateBag.Get(constants.ArmTags).(map[string]*string)
if actualResourceGroupName != expectedResourceGroupName {
t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.")
}
if actualLocation != expectedLocation {
t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.")
}
if len(expectedTags) != len(actualTags) && expectedTags["tag01"] != actualTags["tag01"] {
t.Fatal("Expected the step to source 'constants.ArmTags' from the state bag, but it did not.")
}
_, ok := stateBag.GetOk(constants.ArmIsResourceGroupCreated)
if !ok {
t.Fatal("Expected the step to add item to stateBag['constants.ArmIsResourceGroupCreated'], but it did not.")
}
}
func TestStepCreateResourceGroupMarkShouldFailIfTryingExistingButDoesntExist(t *testing.T) {
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error {
return fmt.Errorf("!! Unit Test FAIL !!")
},
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, nil },
}
stateBag := createTestExistingStateBagStepCreateResourceGroup()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepCreateResourceGroupMarkShouldFailIfTryingTempButExist(t *testing.T) {
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error {
return fmt.Errorf("!! Unit Test FAIL !!")
},
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return true, nil },
}
stateBag := createTestStateBagStepCreateResourceGroup()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func createTestStateBagStepCreateResourceGroup() multistep.StateBag {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmLocation, "Unit Test: Location")
stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName")
stateBag.Put(constants.ArmIsExistingResourceGroup, false)
value := "Unit Test: Tags"
tags := map[string]*string{
"tag01": &value,
}
stateBag.Put(constants.ArmTags, tags)
return stateBag
}
func createTestExistingStateBagStepCreateResourceGroup() multistep.StateBag {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmLocation, "Unit Test: Location")
stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName")
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
value := "Unit Test: Tags"
tags := map[string]*string{
"tag01": &value,
}
stateBag.Put(constants.ArmTags, tags)
return stateBag
}
func TestStepCreateResourceGroupShouldFailIfTagsFailCast(t *testing.T) {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmLocation, "Unit Test: Location")
stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName")
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
value := "Unit Test: Tags"
tags := map[string]string{
"tag01": value,
}
stateBag.Put(constants.ArmTags, tags)
var testSubject = &StepCreateResourceGroup{
create: func(context.Context, string, string, map[string]*string) error { return nil },
say: func(message string) {},
error: func(e error) {},
exists: func(context.Context, string) (bool, error) { return false, nil },
}
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
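The tests above drive StepCreateResourceGroup entirely through its injected function fields (create, exists, say, error) and a multistep.BasicStateBag, so no real Azure client is needed. A minimal sketch of that pattern, reusing the helper defined in this file (the test body itself is illustrative, not part of the repository):
func TestStepCreateResourceGroupSketch(t *testing.T) {
	// Fake dependencies: no Azure calls are made, the step only sees these funcs.
	var testSubject = &StepCreateResourceGroup{
		create: func(context.Context, string, string, map[string]*string) error { return nil },
		exists: func(context.Context, string) (bool, error) { return false, nil },
		say:    func(message string) {},
		error:  func(e error) {},
	}
	stateBag := createTestStateBagStepCreateResourceGroup()
	if result := testSubject.Run(context.Background(), stateBag); result != multistep.ActionContinue {
		t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
	}
}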


@@ -1,118 +0,0 @@
package arm
import (
"context"
"errors"
"fmt"
"net/url"
"strings"
"github.com/hashicorp/packer/builder/azure/common/constants"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
)
type StepDeleteAdditionalDisk struct {
client *AzureClient
delete func(string, string) error
deleteManaged func(context.Context, string, string) error
say func(message string)
error func(e error)
}
func NewStepDeleteAdditionalDisks(client *AzureClient, ui packersdk.Ui) *StepDeleteAdditionalDisk {
var step = &StepDeleteAdditionalDisk{
client: client,
say: func(message string) { ui.Say(message) },
error: func(e error) { ui.Error(e.Error()) },
}
step.delete = step.deleteBlob
step.deleteManaged = step.deleteManagedDisk
return step
}
func (s *StepDeleteAdditionalDisk) deleteBlob(storageContainerName string, blobName string) error {
blob := s.client.BlobStorageClient.GetContainerReference(storageContainerName).GetBlobReference(blobName)
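// Break any existing lease before deleting the blob; a "LeaseNotPresentWithLeaseOperation"
// error is tolerated because an unleased blob can simply be deleted directly.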
_, err := blob.BreakLease(nil)
if err != nil && !strings.Contains(err.Error(), "LeaseNotPresentWithLeaseOperation") {
s.say(s.client.LastError.Error())
return err
}
err = blob.Delete(nil)
if err != nil {
s.say(s.client.LastError.Error())
}
return err
}
func (s *StepDeleteAdditionalDisk) deleteManagedDisk(ctx context.Context, resourceGroupName string, diskName string) error {
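// diskName may arrive as a full resource ID (e.g. ".../providers/Microsoft.Compute/disks/osdisk");
// only its final path segment is the name the Disks API expects.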
xs := strings.Split(diskName, "/")
diskName = xs[len(xs)-1]
f, err := s.client.DisksClient.Delete(ctx, resourceGroupName, diskName)
if err == nil {
err = f.WaitForCompletionRef(ctx, s.client.DisksClient.Client)
}
return err
}
func (s *StepDeleteAdditionalDisk) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
s.say("Deleting the temporary Additional disk ...")
var dataDisks []string
if disks := state.Get(constants.ArmAdditionalDiskVhds); disks != nil {
dataDisks = disks.([]string)
}
var isManagedDisk = state.Get(constants.ArmIsManagedImage).(bool)
var isExistingResourceGroup = state.Get(constants.ArmIsExistingResourceGroup).(bool)
var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string)
if dataDisks == nil {
s.say(fmt.Sprintf(" -> No Additional Disks specified"))
return multistep.ActionContinue
}
if isManagedDisk && !isExistingResourceGroup {
s.say(fmt.Sprintf(" -> Additional Disk : skipping, managed disk was used..."))
return multistep.ActionContinue
}
for i, additionaldisk := range dataDisks {
s.say(fmt.Sprintf(" -> Additional Disk %d: '%s'", i+1, additionaldisk))
var err error
if isManagedDisk {
err = s.deleteManaged(ctx, resourceGroupName, additionaldisk)
if err != nil {
s.say("Failed to delete the managed Additional Disk!")
return processStepResult(err, s.error, state)
}
continue
}
u, err := url.Parse(additionaldisk)
if err != nil {
s.say("Failed to parse the Additional Disk's VHD URI!")
return processStepResult(err, s.error, state)
}
xs := strings.Split(u.Path, "/")
if len(xs) < 3 {
err = errors.New("Failed to parse Additional Disk's VHD URI!")
} else {
var storageAccountName = xs[1]
var blobName = strings.Join(xs[2:], "/")
err = s.delete(storageAccountName, blobName)
}
if err != nil {
return processStepResult(err, s.error, state)
}
}
return multistep.ActionContinue
}
func (*StepDeleteAdditionalDisk) Cleanup(multistep.StateBag) {
}
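For unmanaged (VHD) disks, Run above derives the storage container and blob name by splitting the path of the disk's VHD URI: the first path segment is the container, the rest is the blob. An illustrative helper showing that parsing; the function name is hypothetical (not part of the plugin) and it relies on the same net/url, strings and errors imports used above:
// splitVhdURI extracts the container and blob name from a VHD URI, e.g.
// "http://storage.blob.core.windows.net/images/pkrvm_os.vhd" -> ("images", "pkrvm_os.vhd").
func splitVhdURI(vhdURI string) (container string, blob string, err error) {
	u, err := url.Parse(vhdURI)
	if err != nil {
		return "", "", err
	}
	xs := strings.Split(u.Path, "/") // "/images/pkrvm_os.vhd" -> ["", "images", "pkrvm_os.vhd"]
	if len(xs) < 3 {
		return "", "", errors.New("unexpected VHD URI path: " + u.Path)
	}
	return xs[1], strings.Join(xs[2:], "/"), nil
}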


@@ -1,227 +0,0 @@
package arm
import (
"context"
"errors"
"fmt"
"testing"
"github.com/hashicorp/packer-plugin-sdk/multistep"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func TestStepDeleteAdditionalDiskShouldFailIfGetFails(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") },
deleteManaged: func(context.Context, string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
}
stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/images/pkrvm_os.vhd"})
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepDeleteAdditionalDiskShouldPassIfGetPasses(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
}
stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/images/pkrvm_os.vhd"})
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == true {
t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepDeleteAdditionalDiskShouldTakeStepArgumentsFromStateBag(t *testing.T) {
var actualStorageContainerName string
var actualBlobName string
var testSubject = &StepDeleteAdditionalDisk{
delete: func(storageContainerName string, blobName string) error {
actualStorageContainerName = storageContainerName
actualBlobName = blobName
return nil
},
say: func(message string) {},
error: func(e error) {},
}
stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/images/pkrvm_os.vhd"})
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if actualStorageContainerName != "images" {
t.Fatalf("Expected the storage container name to be 'images', but found '%s'.", actualStorageContainerName)
}
if actualBlobName != "pkrvm_os.vhd" {
t.Fatalf("Expected the blob name to be 'pkrvm_os.vhd', but found '%s'.", actualBlobName)
}
}
func TestStepDeleteAdditionalDiskShouldHandleComplexStorageContainerNames(t *testing.T) {
var actualStorageContainerName string
var actualBlobName string
var testSubject = &StepDeleteAdditionalDisk{
delete: func(storageContainerName string, blobName string) error {
actualStorageContainerName = storageContainerName
actualBlobName = blobName
return nil
},
say: func(message string) {},
error: func(e error) {},
}
stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/abc/def/pkrvm_os.vhd"})
testSubject.Run(context.Background(), stateBag)
if actualStorageContainerName != "abc" {
t.Fatalf("Expected the storage container name to be 'abc/def', but found '%s'.", actualStorageContainerName)
}
if actualBlobName != "def/pkrvm_os.vhd" {
t.Fatalf("Expected the blob name to be 'pkrvm_os.vhd', but found '%s'.", actualBlobName)
}
}
func TestStepDeleteAdditionalDiskShouldFailIfVHDNameCannotBeURLParsed(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
deleteManaged: func(context.Context, string, string) error { return nil },
}
// Invalid URL per https://golang.org/src/net/url/url_test.go
stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://[fe80::1%en0]/"})
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%v'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to not stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepDeleteAdditionalDiskShouldFailIfVHDNameIsTooShort(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
deleteManaged: func(context.Context, string, string) error { return nil },
}
stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"storage.blob.core.windows.net/abc"})
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to not stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepDeleteAdditionalDiskShouldPassIfManagedDiskInTempResourceGroup(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
}
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmAdditionalDiskVhds, []string{"subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk"})
stateBag.Put(constants.ArmIsManagedImage, true)
stateBag.Put(constants.ArmIsExistingResourceGroup, false)
stateBag.Put(constants.ArmResourceGroupName, "testgroup")
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == true {
t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepDeleteAdditionalDiskShouldFailIfManagedDiskInExistingResourceGroupFailsToDelete(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
deleteManaged: func(context.Context, string, string) error { return errors.New("UNIT TEST FAIL!") },
}
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmAdditionalDiskVhds, []string{"subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk"})
stateBag.Put(constants.ArmIsManagedImage, true)
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
stateBag.Put(constants.ArmResourceGroupName, "testgroup")
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to not stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepDeleteAdditionalDiskShouldFailIfManagedDiskInExistingResourceGroupIsDeleted(t *testing.T) {
var testSubject = &StepDeleteAdditionalDisk{
delete: func(string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
deleteManaged: func(context.Context, string, string) error { return nil },
}
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmAdditionalDiskVhds, []string{"subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk"})
stateBag.Put(constants.ArmIsManagedImage, true)
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
stateBag.Put(constants.ArmResourceGroupName, "testgroup")
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == true {
t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error)
}
}
func DeleteTestStateBagStepDeleteAdditionalDisk(osDiskVhds []string) multistep.StateBag {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmAdditionalDiskVhds, osDiskVhds)
stateBag.Put(constants.ArmIsManagedImage, false)
stateBag.Put(constants.ArmIsExistingResourceGroup, false)
stateBag.Put(constants.ArmResourceGroupName, "testgroup")
return stateBag
}


@@ -1,305 +0,0 @@
package arm
import (
"context"
"errors"
"fmt"
"net/url"
"strings"
"sync"
"time"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer-plugin-sdk/retry"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
type StepDeployTemplate struct {
client *AzureClient
deploy func(ctx context.Context, resourceGroupName string, deploymentName string) error
delete func(ctx context.Context, deploymentName, resourceGroupName string) error
disk func(ctx context.Context, resourceGroupName string, computeName string) (string, string, error)
deleteDisk func(ctx context.Context, imageType string, imageName string, resourceGroupName string) error
deleteDeployment func(ctx context.Context, state multistep.StateBag) error
say func(message string)
error func(e error)
config *Config
factory templateFactoryFunc
name string
}
func NewStepDeployTemplate(client *AzureClient, ui packersdk.Ui, config *Config, deploymentName string, factory templateFactoryFunc) *StepDeployTemplate {
var step = &StepDeployTemplate{
client: client,
say: func(message string) { ui.Say(message) },
error: func(e error) { ui.Error(e.Error()) },
config: config,
factory: factory,
name: deploymentName,
}
step.deploy = step.deployTemplate
step.delete = step.deleteDeploymentResources
step.disk = step.getImageDetails
step.deleteDisk = step.deleteImage
step.deleteDeployment = step.deleteDeploymentObject
return step
}
func (s *StepDeployTemplate) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
s.say("Deploying deployment template ...")
var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string)
s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName))
s.say(fmt.Sprintf(" -> DeploymentName : '%s'", s.name))
return processStepResult(
s.deploy(ctx, resourceGroupName, s.name),
s.error, state)
}
func (s *StepDeployTemplate) Cleanup(state multistep.StateBag) {
defer func() {
err := s.deleteDeployment(context.Background(), state)
if err != nil {
s.say(err.Error())
}
}()
ui := state.Get("ui").(packersdk.Ui)
ui.Say("\nDeleting individual resources ...")
deploymentName := s.name
resourceGroupName := state.Get(constants.ArmResourceGroupName).(string)
// Get image disk details before deleting the image; otherwise we won't be able to
// delete the disk as the image request will return a 404
computeName := state.Get(constants.ArmComputeName).(string)
imageType, imageName, err := s.disk(context.TODO(), resourceGroupName, computeName)
if err != nil && !strings.Contains(err.Error(), "ResourceNotFound") {
ui.Error(fmt.Sprintf("Could not retrieve OS Image details: %s", err))
}
err = s.delete(context.TODO(), deploymentName, resourceGroupName)
if err != nil {
s.reportIfError(err, resourceGroupName)
}
// The disk was not found on the VM, this is an error.
if imageType == "" && imageName == "" {
ui.Error(fmt.Sprintf("Failed to find temporary OS disk on VM. Please delete manually.\n\n"+
"VM Name: %s\n"+
"Error: %s", computeName, err))
return
}
ui.Say(fmt.Sprintf(" Deleting -> %s : '%s'", imageType, imageName))
err = s.deleteDisk(context.TODO(), imageType, imageName, resourceGroupName)
if err != nil {
ui.Error(fmt.Sprintf("Error deleting resource. Please delete manually.\n\n"+
"Name: %s\n"+
"Error: %s", imageName, err))
}
}
func (s *StepDeployTemplate) deployTemplate(ctx context.Context, resourceGroupName string, deploymentName string) error {
deployment, err := s.factory(s.config)
if err != nil {
return err
}
f, err := s.client.DeploymentsClient.CreateOrUpdate(ctx, resourceGroupName, deploymentName, *deployment)
if err != nil {
s.say(s.client.LastError.Error())
return err
}
err = f.WaitForCompletionRef(ctx, s.client.DeploymentsClient.Client)
if err != nil {
s.say(s.client.LastError.Error())
}
return err
}
func (s *StepDeployTemplate) deleteDeploymentObject(ctx context.Context, state multistep.StateBag) error {
deploymentName := s.name
resourceGroupName := state.Get(constants.ArmResourceGroupName).(string)
ui := state.Get("ui").(packersdk.Ui)
ui.Say(fmt.Sprintf("Removing the created Deployment object: '%s'", deploymentName))
f, err := s.client.DeploymentsClient.Delete(ctx, resourceGroupName, deploymentName)
if err != nil {
return err
}
return f.WaitForCompletionRef(ctx, s.client.DeploymentsClient.Client)
}
func (s *StepDeployTemplate) getImageDetails(ctx context.Context, resourceGroupName string, computeName string) (string, string, error) {
// We can't depend on constants.ArmOSDiskVhd being set
var imageName, imageType string
vm, err := s.client.VirtualMachinesClient.Get(ctx, resourceGroupName, computeName, "")
if err != nil {
return imageType, imageName, err
}
if vm.StorageProfile.OsDisk.Vhd != nil {
imageType = "image"
imageName = *vm.StorageProfile.OsDisk.Vhd.URI
return imageType, imageName, nil
}
imageType = "Microsoft.Compute/disks"
imageName = *vm.StorageProfile.OsDisk.ManagedDisk.ID
return imageType, imageName, nil
}
//TODO(paulmey): move to helpers file
func deleteResource(ctx context.Context, client *AzureClient, resourceType string, resourceName string, resourceGroupName string) error {
switch resourceType {
case "Microsoft.Compute/virtualMachines":
f, err := client.VirtualMachinesClient.Delete(ctx, resourceGroupName, resourceName)
if err == nil {
err = f.WaitForCompletionRef(ctx, client.VirtualMachinesClient.Client)
}
return err
case "Microsoft.KeyVault/vaults":
_, err := client.VaultClientDelete.Delete(ctx, resourceGroupName, resourceName)
return err
case "Microsoft.Network/networkInterfaces":
f, err := client.InterfacesClient.Delete(ctx, resourceGroupName, resourceName)
if err == nil {
err = f.WaitForCompletionRef(ctx, client.InterfacesClient.Client)
}
return err
case "Microsoft.Network/virtualNetworks":
f, err := client.VirtualNetworksClient.Delete(ctx, resourceGroupName, resourceName)
if err == nil {
err = f.WaitForCompletionRef(ctx, client.VirtualNetworksClient.Client)
}
return err
case "Microsoft.Network/networkSecurityGroups":
f, err := client.SecurityGroupsClient.Delete(ctx, resourceGroupName, resourceName)
if err == nil {
err = f.WaitForCompletionRef(ctx, client.SecurityGroupsClient.Client)
}
return err
case "Microsoft.Network/publicIPAddresses":
f, err := client.PublicIPAddressesClient.Delete(ctx, resourceGroupName, resourceName)
if err == nil {
err = f.WaitForCompletionRef(ctx, client.PublicIPAddressesClient.Client)
}
return err
}
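// Resource types not listed above are skipped on purpose (nil is returned); this
// step only tears down the resource kinds it knows how to delete.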
return nil
}
func (s *StepDeployTemplate) deleteImage(ctx context.Context, imageType string, imageName string, resourceGroupName string) error {
// Managed disk
if imageType == "Microsoft.Compute/disks" {
xs := strings.Split(imageName, "/")
diskName := xs[len(xs)-1]
f, err := s.client.DisksClient.Delete(ctx, resourceGroupName, diskName)
if err == nil {
err = f.WaitForCompletionRef(ctx, s.client.DisksClient.Client)
}
return err
}
// VHD image
u, err := url.Parse(imageName)
if err != nil {
return err
}
xs := strings.Split(u.Path, "/")
if len(xs) < 3 {
return errors.New("Unable to parse path of image " + imageName)
}
var storageAccountName = xs[1]
var blobName = strings.Join(xs[2:], "/")
blob := s.client.BlobStorageClient.GetContainerReference(storageAccountName).GetBlobReference(blobName)
_, err = blob.BreakLease(nil)
if err != nil && !strings.Contains(err.Error(), "LeaseNotPresentWithLeaseOperation") {
s.say(s.client.LastError.Error())
return err
}
return blob.Delete(nil)
}
func (s *StepDeployTemplate) deleteDeploymentResources(ctx context.Context, deploymentName, resourceGroupName string) error {
var maxResources int32 = 50
deploymentOperations, err := s.client.DeploymentOperationsClient.ListComplete(ctx, resourceGroupName, deploymentName, &maxResources)
if err != nil {
s.reportIfError(err, resourceGroupName)
return err
}
resources := map[string]string{}
for deploymentOperations.NotDone() {
deploymentOperation := deploymentOperations.Value()
// Sometimes an empty operation is added to the list by Azure
if deploymentOperation.Properties.TargetResource == nil {
_ = deploymentOperations.Next()
continue
}
resourceName := *deploymentOperation.Properties.TargetResource.ResourceName
resourceType := *deploymentOperation.Properties.TargetResource.ResourceType
s.say(fmt.Sprintf("Adding to deletion queue -> %s : '%s'", resourceType, resourceName))
resources[resourceType] = resourceName
if err = deploymentOperations.Next(); err != nil {
return err
}
}
var wg sync.WaitGroup
wg.Add(len(resources))
for resourceType, resourceName := range resources {
go func(resourceType, resourceName string) {
defer wg.Done()
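// Each delete is retried up to 10 times with a backoff starting at 10s (capped at
// 600s) to ride out transient failures, e.g. while dependent resources are still
// being released.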
retryConfig := retry.Config{
Tries: 10,
RetryDelay: (&retry.Backoff{InitialBackoff: 10 * time.Second, MaxBackoff: 600 * time.Second, Multiplier: 2}).Linear,
}
err = retryConfig.Run(ctx, func(ctx context.Context) error {
s.say(fmt.Sprintf("Attempting deletion -> %s : '%s'", resourceType, resourceName))
err := deleteResource(ctx, s.client,
resourceType,
resourceName,
resourceGroupName)
if err != nil {
s.say(fmt.Sprintf("Error deleting resource. Will retry.\n"+
"Name: %s\n"+
"Error: %s\n", resourceName, err.Error()))
}
return err
})
if err != nil {
s.reportIfError(err, resourceName)
}
}(resourceType, resourceName)
}
s.say("Waiting for deletion of all resources...")
wg.Wait()
return nil
}
func (s *StepDeployTemplate) reportIfError(err error, resourceName string) {
if err != nil {
s.say(fmt.Sprintf("Error deleting resource. Please delete manually.\n\n"+
"Name: %s\n"+
"Error: %s", resourceName, err.Error()))
s.error(err)
}
}


@@ -1,205 +0,0 @@
package arm
import (
"context"
"fmt"
"testing"
"github.com/hashicorp/packer-plugin-sdk/multistep"
packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
"github.com/hashicorp/packer/builder/azure/common/constants"
)
func TestStepDeployTemplateShouldFailIfDeployFails(t *testing.T) {
var testSubject = &StepDeployTemplate{
deploy: func(context.Context, string, string) error {
return fmt.Errorf("!! Unit Test FAIL !!")
},
say: func(message string) {},
error: func(e error) {},
}
stateBag := createTestStateBagStepDeployTemplate()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionHalt {
t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == false {
t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error)
}
}
func TestStepDeployTemplateShouldPassIfDeployPasses(t *testing.T) {
var testSubject = &StepDeployTemplate{
deploy: func(context.Context, string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
}
stateBag := createTestStateBagStepDeployTemplate()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
if _, ok := stateBag.GetOk(constants.Error); ok == true {
t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error)
}
}
func TestStepDeployTemplateShouldTakeStepArgumentsFromStateBag(t *testing.T) {
var actualResourceGroupName string
var actualDeploymentName string
var testSubject = &StepDeployTemplate{
deploy: func(ctx context.Context, resourceGroupName string, deploymentName string) error {
actualResourceGroupName = resourceGroupName
actualDeploymentName = deploymentName
return nil
},
say: func(message string) {},
error: func(e error) {},
name: "--deployment-name--",
}
stateBag := createTestStateBagStepValidateTemplate()
var result = testSubject.Run(context.Background(), stateBag)
if result != multistep.ActionContinue {
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result)
}
var expectedResourceGroupName = stateBag.Get(constants.ArmResourceGroupName).(string)
if actualDeploymentName != "--deployment-name--" {
t.Fatal("Expected StepValidateTemplate to source 'constants.ArmDeploymentName' from the state bag, but it did not.")
}
if actualResourceGroupName != expectedResourceGroupName {
t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.")
}
}
func TestStepDeployTemplateDeleteImageShouldFailWhenImageUrlCannotBeParsed(t *testing.T) {
var testSubject = &StepDeployTemplate{
say: func(message string) {},
error: func(e error) {},
name: "--deployment-name--",
}
// Invalid URL per https://golang.org/src/net/url/url_test.go
err := testSubject.deleteImage(context.TODO(), "image", "http://[fe80::1%en0]/", "Unit Test: ResourceGroupName")
if err == nil {
t.Fatal("Expected a failure because of the failed image name")
}
}
func TestStepDeployTemplateDeleteImageShouldFailWithInvalidImage(t *testing.T) {
var testSubject = &StepDeployTemplate{
say: func(message string) {},
error: func(e error) {},
name: "--deployment-name--",
}
err := testSubject.deleteImage(context.TODO(), "image", "storage.blob.core.windows.net/abc", "Unit Test: ResourceGroupName")
if err == nil {
t.Fatal("Expected a failure because of the failed image name")
}
}
func TestStepDeployTemplateCleanupShouldDeleteManagedOSImageInExistingResourceGroup(t *testing.T) {
var deleteDiskCounter = 0
var testSubject = createTestStepDeployTemplateDeleteOSImage(&deleteDiskCounter)
stateBag := createTestStateBagStepDeployTemplate()
stateBag.Put(constants.ArmIsManagedImage, true)
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
stateBag.Put(constants.ArmIsResourceGroupCreated, true)
stateBag.Put("ui", packersdk.TestUi(t))
testSubject.Cleanup(stateBag)
if deleteDiskCounter != 1 {
t.Fatalf("Expected DeployTemplate Cleanup to invoke deleteDisk 1 time, but invoked %d times", deleteDiskCounter)
}
}
func TestStepDeployTemplateCleanupShouldDeleteManagedOSImageInTemporaryResourceGroup(t *testing.T) {
var deleteDiskCounter = 0
var testSubject = createTestStepDeployTemplateDeleteOSImage(&deleteDiskCounter)
stateBag := createTestStateBagStepDeployTemplate()
stateBag.Put(constants.ArmIsManagedImage, true)
stateBag.Put(constants.ArmIsExistingResourceGroup, false)
stateBag.Put(constants.ArmIsResourceGroupCreated, true)
stateBag.Put("ui", packersdk.TestUi(t))
testSubject.Cleanup(stateBag)
if deleteDiskCounter != 1 {
t.Fatalf("Expected DeployTemplate Cleanup to invoke deleteDisk 1 times, but invoked %d times", deleteDiskCounter)
}
}
func TestStepDeployTemplateCleanupShouldDeleteVHDOSImageInExistingResourceGroup(t *testing.T) {
var deleteDiskCounter = 0
var testSubject = createTestStepDeployTemplateDeleteOSImage(&deleteDiskCounter)
stateBag := createTestStateBagStepDeployTemplate()
stateBag.Put(constants.ArmIsManagedImage, false)
stateBag.Put(constants.ArmIsExistingResourceGroup, true)
stateBag.Put(constants.ArmIsResourceGroupCreated, true)
stateBag.Put("ui", packersdk.TestUi(t))
testSubject.Cleanup(stateBag)
if deleteDiskCounter != 1 {
t.Fatalf("Expected DeployTemplate Cleanup to invoke deleteDisk 1 time, but invoked %d times", deleteDiskCounter)
}
}
func TestStepDeployTemplateCleanupShouldDeleteVHDOSImageInTemporaryResourceGroup(t *testing.T) {
var deleteDiskCounter = 0
var testSubject = createTestStepDeployTemplateDeleteOSImage(&deleteDiskCounter)
stateBag := createTestStateBagStepDeployTemplate()
stateBag.Put(constants.ArmIsManagedImage, false)
stateBag.Put(constants.ArmIsExistingResourceGroup, false)
stateBag.Put(constants.ArmIsResourceGroupCreated, true)
stateBag.Put("ui", packersdk.TestUi(t))
testSubject.Cleanup(stateBag)
if deleteDiskCounter != 1 {
t.Fatalf("Expected DeployTemplate Cleanup to invoke deleteDisk 1 times, but invoked %d times", deleteDiskCounter)
}
}
func createTestStateBagStepDeployTemplate() multistep.StateBag {
stateBag := new(multistep.BasicStateBag)
stateBag.Put(constants.ArmDeploymentName, "Unit Test: DeploymentName")
stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName")
stateBag.Put(constants.ArmComputeName, "Unit Test: ComputeName")
return stateBag
}
func createTestStepDeployTemplateDeleteOSImage(deleteDiskCounter *int) *StepDeployTemplate {
return &StepDeployTemplate{
deploy: func(context.Context, string, string) error { return nil },
say: func(message string) {},
error: func(e error) {},
deleteDisk: func(ctx context.Context, imageType string, imageName string, resourceGroupName string) error {
*deleteDiskCounter++
return nil
},
disk: func(ctx context.Context, resourceGroupName, computeName string) (string, string, error) {
return "Microsoft.Compute/disks", "", nil
},
delete: func(ctx context.Context, deploymentName, resourceGroupName string) error {
return nil
},
deleteDeployment: func(ctx context.Context, state multistep.StateBag) error {
return nil
},
}
}

Some files were not shown because too many files have changed in this diff.